Dec 13 14:05:22.003306 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 14:05:22.003324 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024
Dec 13 14:05:22.003332 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Dec 13 14:05:22.003339 kernel: printk: bootconsole [pl11] enabled
Dec 13 14:05:22.003344 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:05:22.003349 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3763cf98
Dec 13 14:05:22.003356 kernel: random: crng init done
Dec 13 14:05:22.003361 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:05:22.003367 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Dec 13 14:05:22.003372 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:22.003377 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:22.003383 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 13 14:05:22.003389 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:22.003395 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:22.003401 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:22.003407 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:22.003413 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:22.003420 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:22.003426 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Dec 13 14:05:22.003432 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 14:05:22.003438 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Dec 13 14:05:22.003444 kernel: NUMA: Failed to initialise from firmware
Dec 13 14:05:22.003450 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Dec 13 14:05:22.003455 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff]
Dec 13 14:05:22.003461 kernel: Zone ranges:
Dec 13 14:05:22.003467 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Dec 13 14:05:22.003473 kernel:   DMA32    empty
Dec 13 14:05:22.003478 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Dec 13 14:05:22.003485 kernel: Movable zone start for each node
Dec 13 14:05:22.003491 kernel: Early memory node ranges
Dec 13 14:05:22.003497 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Dec 13 14:05:22.003503 kernel:   node   0: [mem 0x0000000000824000-0x000000003e54ffff]
Dec 13 14:05:22.003508 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
Dec 13 14:05:22.003514 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
Dec 13 14:05:22.003520 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
Dec 13 14:05:22.003526 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
Dec 13 14:05:22.003531 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Dec 13 14:05:22.003537 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Dec 13 14:05:22.003543 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Dec 13 14:05:22.003549 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:05:22.003558 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 14:05:22.003564 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:05:22.003571 kernel: psci: MIGRATE_INFO_TYPE not supported.
Dec 13 14:05:22.003576 kernel: psci: SMC Calling Convention v1.4
Dec 13 14:05:22.003582 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Dec 13 14:05:22.003590 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Dec 13 14:05:22.003596 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Dec 13 14:05:22.003602 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Dec 13 14:05:22.003608 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 14:05:22.003614 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:05:22.003620 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:05:22.003626 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 14:05:22.003632 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:05:22.003638 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:05:22.003644 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:05:22.003650 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 14:05:22.003658 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Dec 13 14:05:22.003664 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 14:05:22.003670 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Dec 13 14:05:22.003676 kernel: Policy zone: Normal
Dec 13 14:05:22.003683 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:05:22.003690 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:05:22.003696 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:05:22.003703 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:05:22.003709 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:05:22.003715 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Dec 13 14:05:22.003721 kernel: Memory: 3986944K/4194160K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 207216K reserved, 0K cma-reserved)
Dec 13 14:05:22.003729 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:05:22.003735 kernel: trace event string verifier disabled
Dec 13 14:05:22.003741 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:05:22.003748 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:05:22.003754 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:05:22.003760 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:05:22.003766 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:05:22.003772 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:05:22.003779 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:05:22.003785 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:05:22.003791 kernel: GICv3: 960 SPIs implemented
Dec 13 14:05:22.003798 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:05:22.003804 kernel: GICv3: Distributor has no Range Selector support
Dec 13 14:05:22.003810 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:05:22.003816 kernel: GICv3: 16 PPIs implemented
Dec 13 14:05:22.003822 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Dec 13 14:05:22.003828 kernel: ITS: No ITS available, not enabling LPIs
Dec 13 14:05:22.003834 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:05:22.003840 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 14:05:22.003847 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 14:05:22.003853 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 14:05:22.008902 kernel: Console: colour dummy device 80x25
Dec 13 14:05:22.008921 kernel: printk: console [tty1] enabled
Dec 13 14:05:22.008928 kernel: ACPI: Core revision 20210730
Dec 13 14:05:22.008934 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 14:05:22.008941 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:05:22.008947 kernel: LSM: Security Framework initializing
Dec 13 14:05:22.008953 kernel: SELinux:  Initializing.
Dec 13 14:05:22.008960 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:05:22.008967 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:05:22.008973 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Dec 13 14:05:22.008981 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Dec 13 14:05:22.008988 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:05:22.008994 kernel: Remapping and enabling EFI services.
Dec 13 14:05:22.009001 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:05:22.009007 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:05:22.009013 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Dec 13 14:05:22.009020 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:05:22.009026 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 14:05:22.009033 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:05:22.009039 kernel: SMP: Total of 2 processors activated.
Dec 13 14:05:22.009047 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:05:22.009054 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Dec 13 14:05:22.009060 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 14:05:22.009067 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:05:22.009073 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 14:05:22.009080 kernel: CPU features: detected: LSE atomic instructions
Dec 13 14:05:22.009086 kernel: CPU features: detected: Privileged Access Never
Dec 13 14:05:22.009092 kernel: CPU: All CPU(s) started at EL1
Dec 13 14:05:22.009098 kernel: alternatives: patching kernel code
Dec 13 14:05:22.009106 kernel: devtmpfs: initialized
Dec 13 14:05:22.009117 kernel: KASLR enabled
Dec 13 14:05:22.009124 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:05:22.009132 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:05:22.009139 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:05:22.009145 kernel: SMBIOS 3.1.0 present.
Dec 13 14:05:22.009152 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Dec 13 14:05:22.009159 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:05:22.009166 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:05:22.009174 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:05:22.009181 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:05:22.009189 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:05:22.009195 kernel: audit: type=2000 audit(0.085:1): state=initialized audit_enabled=0 res=1
Dec 13 14:05:22.009202 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:05:22.009209 kernel: cpuidle: using governor menu
Dec 13 14:05:22.009215 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:05:22.009223 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:05:22.009230 kernel: ACPI: bus type PCI registered
Dec 13 14:05:22.009237 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:05:22.009243 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:05:22.009250 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:05:22.009256 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:05:22.009263 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:05:22.009270 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:05:22.009276 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:05:22.009285 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 14:05:22.009291 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:05:22.009298 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:05:22.009305 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:05:22.009312 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:05:22.009318 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:05:22.009325 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:05:22.009332 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:05:22.009338 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:05:22.009346 kernel: ACPI: Interpreter enabled
Dec 13 14:05:22.009352 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:05:22.009359 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 14:05:22.009366 kernel: printk: console [ttyAMA0] enabled
Dec 13 14:05:22.009373 kernel: printk: bootconsole [pl11] disabled
Dec 13 14:05:22.009380 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Dec 13 14:05:22.009386 kernel: iommu: Default domain type: Translated
Dec 13 14:05:22.009393 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:05:22.009399 kernel: vgaarb: loaded
Dec 13 14:05:22.009406 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:05:22.009414 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:05:22.009421 kernel: PTP clock support registered
Dec 13 14:05:22.009427 kernel: Registered efivars operations
Dec 13 14:05:22.009434 kernel: No ACPI PMU IRQ for CPU0
Dec 13 14:05:22.009440 kernel: No ACPI PMU IRQ for CPU1
Dec 13 14:05:22.009447 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:05:22.009454 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:05:22.009460 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:05:22.009468 kernel: pnp: PnP ACPI init
Dec 13 14:05:22.009475 kernel: pnp: PnP ACPI: found 0 devices
Dec 13 14:05:22.009481 kernel: NET: Registered PF_INET protocol family
Dec 13 14:05:22.009488 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:05:22.009495 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:05:22.009502 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:05:22.009508 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:05:22.009515 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:05:22.009522 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:05:22.009530 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:05:22.009537 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:05:22.009544 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:05:22.009550 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:05:22.009557 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Dec 13 14:05:22.009564 kernel: kvm [1]: HYP mode not available
Dec 13 14:05:22.009570 kernel: Initialise system trusted keyrings
Dec 13 14:05:22.009577 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:05:22.009584 kernel: Key type asymmetric registered
Dec 13 14:05:22.009592 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:05:22.009598 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:05:22.009605 kernel: io scheduler mq-deadline registered
Dec 13 14:05:22.009611 kernel: io scheduler kyber registered
Dec 13 14:05:22.009618 kernel: io scheduler bfq registered
Dec 13 14:05:22.009624 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:05:22.009631 kernel: thunder_xcv, ver 1.0
Dec 13 14:05:22.009638 kernel: thunder_bgx, ver 1.0
Dec 13 14:05:22.009644 kernel: nicpf, ver 1.0
Dec 13 14:05:22.009651 kernel: nicvf, ver 1.0
Dec 13 14:05:22.009780 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 14:05:22.009840 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:05:21 UTC (1734098721)
Dec 13 14:05:22.009850 kernel: efifb: probing for efifb
Dec 13 14:05:22.009874 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 14:05:22.009884 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 14:05:22.009891 kernel: efifb: scrolling: redraw
Dec 13 14:05:22.009898 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 14:05:22.009907 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 14:05:22.009914 kernel: fb0: EFI VGA frame buffer device
Dec 13 14:05:22.009920 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Dec 13 14:05:22.009927 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:05:22.009934 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:05:22.009940 kernel: Segment Routing with IPv6
Dec 13 14:05:22.009947 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:05:22.009954 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:05:22.009960 kernel: Key type dns_resolver registered
Dec 13 14:05:22.009967 kernel: registered taskstats version 1
Dec 13 14:05:22.009975 kernel: Loading compiled-in X.509 certificates
Dec 13 14:05:22.009982 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a'
Dec 13 14:05:22.009989 kernel: Key type .fscrypt registered
Dec 13 14:05:22.009995 kernel: Key type fscrypt-provisioning registered
Dec 13 14:05:22.010002 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:05:22.010008 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:05:22.010015 kernel: ima: No architecture policies found
Dec 13 14:05:22.010022 kernel: clk: Disabling unused clocks
Dec 13 14:05:22.010030 kernel: Freeing unused kernel memory: 36416K
Dec 13 14:05:22.010037 kernel: Run /init as init process
Dec 13 14:05:22.010043 kernel:   with arguments:
Dec 13 14:05:22.010050 kernel:     /init
Dec 13 14:05:22.010056 kernel:   with environment:
Dec 13 14:05:22.010063 kernel:     HOME=/
Dec 13 14:05:22.010069 kernel:     TERM=linux
Dec 13 14:05:22.010075 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:05:22.010085 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:05:22.010095 systemd[1]: Detected virtualization microsoft.
Dec 13 14:05:22.010103 systemd[1]: Detected architecture arm64.
Dec 13 14:05:22.010110 systemd[1]: Running in initrd.
Dec 13 14:05:22.010117 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:05:22.010124 systemd[1]: Hostname set to .
Dec 13 14:05:22.010131 systemd[1]: Initializing machine ID from random generator.
Dec 13 14:05:22.010138 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:05:22.010147 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:05:22.010154 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:05:22.010161 systemd[1]: Reached target paths.target.
Dec 13 14:05:22.010168 systemd[1]: Reached target slices.target.
Dec 13 14:05:22.010175 systemd[1]: Reached target swap.target.
Dec 13 14:05:22.010182 systemd[1]: Reached target timers.target.
Dec 13 14:05:22.010189 systemd[1]: Listening on iscsid.socket.
Dec 13 14:05:22.010196 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:05:22.010205 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:05:22.010212 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:05:22.010219 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:05:22.010226 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:05:22.010233 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:05:22.010241 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:05:22.010248 systemd[1]: Reached target sockets.target.
Dec 13 14:05:22.010255 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:05:22.010262 systemd[1]: Finished network-cleanup.service.
Dec 13 14:05:22.010270 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:05:22.010277 systemd[1]: Starting systemd-journald.service...
Dec 13 14:05:22.010284 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:05:22.010291 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:05:22.010299 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:05:22.010309 systemd-journald[276]: Journal started
Dec 13 14:05:22.010353 systemd-journald[276]: Runtime Journal (/run/log/journal/cf8ae6a81cb449a89ce578cf54f950da) is 8.0M, max 78.5M, 70.5M free.
Dec 13 14:05:21.993979 systemd-modules-load[277]: Inserted module 'overlay'
Dec 13 14:05:22.036287 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:05:22.044240 systemd-modules-load[277]: Inserted module 'br_netfilter'
Dec 13 14:05:22.053632 kernel: Bridge firewalling registered
Dec 13 14:05:22.053654 systemd[1]: Started systemd-journald.service.
Dec 13 14:05:22.053076 systemd-resolved[278]: Positive Trust Anchors:
Dec 13 14:05:22.053083 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:05:22.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.093907 kernel: audit: type=1130 audit(1734098722.074:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.093926 kernel: SCSI subsystem initialized
Dec 13 14:05:22.053111 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:05:22.055133 systemd-resolved[278]: Defaulting to hostname 'linux'.
Dec 13 14:05:22.172956 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:05:22.172986 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:05:22.172995 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:05:22.173004 kernel: audit: type=1130 audit(1734098722.155:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.090710 systemd[1]: Started systemd-resolved.service.
Dec 13 14:05:22.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.172034 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:05:22.221261 kernel: audit: type=1130 audit(1734098722.176:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.221284 kernel: audit: type=1130 audit(1734098722.201:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.172040 systemd-modules-load[277]: Inserted module 'dm_multipath'
Dec 13 14:05:22.177044 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:05:22.254595 kernel: audit: type=1130 audit(1734098722.225:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.202101 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:05:22.278067 kernel: audit: type=1130 audit(1734098722.252:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.226144 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:05:22.253141 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:05:22.277979 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:05:22.283296 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:05:22.292221 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:05:22.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.315485 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:05:22.343968 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:05:22.372982 kernel: audit: type=1130 audit(1734098722.321:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.373005 kernel: audit: type=1130 audit(1734098722.352:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.353291 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:05:22.381326 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:05:22.405817 kernel: audit: type=1130 audit(1734098722.377:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.411174 dracut-cmdline[298]: dracut-dracut-053
Dec 13 14:05:22.416112 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:05:22.501881 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:05:22.516878 kernel: iscsi: registered transport (tcp)
Dec 13 14:05:22.537475 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:05:22.537536 kernel: QLogic iSCSI HBA Driver
Dec 13 14:05:22.566989 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:05:22.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:22.572640 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:05:22.627878 kernel: raid6: neonx8   gen() 13829 MB/s
Dec 13 14:05:22.645868 kernel: raid6: neonx8   xor() 10846 MB/s
Dec 13 14:05:22.665868 kernel: raid6: neonx4   gen() 13551 MB/s
Dec 13 14:05:22.686873 kernel: raid6: neonx4   xor() 11323 MB/s
Dec 13 14:05:22.706867 kernel: raid6: neonx2   gen() 12972 MB/s
Dec 13 14:05:22.726867 kernel: raid6: neonx2   xor() 10388 MB/s
Dec 13 14:05:22.747868 kernel: raid6: neonx1   gen() 10546 MB/s
Dec 13 14:05:22.767867 kernel: raid6: neonx1   xor()  8789 MB/s
Dec 13 14:05:22.787867 kernel: raid6: int64x8  gen()  6272 MB/s
Dec 13 14:05:22.808872 kernel: raid6: int64x8  xor()  3542 MB/s
Dec 13 14:05:22.828866 kernel: raid6: int64x4  gen()  7220 MB/s
Dec 13 14:05:22.848868 kernel: raid6: int64x4  xor()  3859 MB/s
Dec 13 14:05:22.869868 kernel: raid6: int64x2  gen()  6153 MB/s
Dec 13 14:05:22.889872 kernel: raid6: int64x2  xor()  3318 MB/s
Dec 13 14:05:22.909866 kernel: raid6: int64x1  gen()  5047 MB/s
Dec 13 14:05:22.935020 kernel: raid6: int64x1  xor()  2647 MB/s
Dec 13 14:05:22.935040 kernel: raid6: using algorithm neonx8 gen() 13829 MB/s
Dec 13 14:05:22.935056 kernel: raid6: .... xor() 10846 MB/s, rmw enabled
Dec 13 14:05:22.939151 kernel: raid6: using neon recovery algorithm
Dec 13 14:05:22.959989 kernel: xor: measuring software checksum speed
Dec 13 14:05:22.960012 kernel:    8regs           : 17209 MB/sec
Dec 13 14:05:22.963799 kernel:    32regs          : 20707 MB/sec
Dec 13 14:05:22.971905 kernel:    arm64_neon      : 26230 MB/sec
Dec 13 14:05:22.971915 kernel: xor: using function: arm64_neon (26230 MB/sec)
Dec 13 14:05:23.027874 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Dec 13 14:05:23.037213 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:05:23.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.044000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:05:23.044000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:05:23.046039 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:05:23.060332 systemd-udevd[474]: Using default interface naming scheme 'v252'.
Dec 13 14:05:23.065888 systemd[1]: Started systemd-udevd.service.
Dec 13 14:05:23.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.076441 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:05:23.095447 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation
Dec 13 14:05:23.124809 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:05:23.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.130320 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:05:23.164839 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:05:23.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:23.209895 kernel: hv_vmbus: Vmbus version:5.3
Dec 13 14:05:23.221662 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 13 14:05:23.221715 kernel: hv_vmbus: registering driver hid_hyperv
Dec 13 14:05:23.221724 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Dec 13 14:05:23.253122 kernel: hv_vmbus: registering driver hv_netvsc
Dec 13 14:05:23.253178 kernel: hv_vmbus: registering driver hv_storvsc
Dec 13 14:05:23.272276 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Dec 13 14:05:23.272335 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 13 14:05:23.274881 kernel: scsi host0: storvsc_host_t
Dec 13 14:05:23.289029 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 13 14:05:23.289107 kernel: scsi host1: storvsc_host_t
Dec 13 14:05:23.289883 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Dec 13 14:05:23.318195 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 13 14:05:23.318996 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 14:05:23.319021 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 13 14:05:23.331875 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 13 14:05:23.361393 kernel: hv_netvsc 002248b8-5942-0022-48b8-5942002248b8 eth0: VF slot 1 added
Dec 13 14:05:23.361512 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 13 14:05:23.361606 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 14:05:23.361683 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 13 14:05:23.361765 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 13 14:05:23.361848 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:05:23.361887 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 14:05:23.375587 kernel: hv_vmbus: registering driver hv_pci
Dec 13 14:05:23.375654 kernel: hv_pci 39b82140-84b3-4c6f-90ff-7e3c03529812: PCI VMBus probing: Using version 0x10004
Dec 13 14:05:23.483712 kernel: hv_pci 39b82140-84b3-4c6f-90ff-7e3c03529812: PCI host bridge to bus 84b3:00
Dec 13 14:05:23.483812 kernel: pci_bus 84b3:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Dec 13 14:05:23.483939 kernel: pci_bus 84b3:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 13 14:05:23.484011 kernel: pci 84b3:00:02.0: [15b3:1018] type 00 class 0x020000
Dec 13 14:05:23.484098 kernel: pci 84b3:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Dec 13 14:05:23.484172 kernel: pci 84b3:00:02.0: enabling Extended Tags
Dec 13 14:05:23.484244 kernel: pci 84b3:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 84b3:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Dec 13 14:05:23.484317 kernel: pci_bus 84b3:00: busn_res: [bus 00-ff] end is updated to 00
Dec 13 14:05:23.484387 kernel: pci 84b3:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Dec 13 14:05:23.521890 kernel: mlx5_core 84b3:00:02.0: firmware version: 16.30.1284
Dec 13 14:05:23.738231 kernel: mlx5_core 84b3:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Dec 13 14:05:23.738343 kernel: hv_netvsc 002248b8-5942-0022-48b8-5942002248b8 eth0: VF registering: eth1
Dec 13 14:05:23.738423 kernel: mlx5_core 84b3:00:02.0 eth1: joined to eth0
Dec 13 14:05:23.746885 kernel: mlx5_core 84b3:00:02.0 enP33971s1: renamed from eth1
Dec 13 14:05:23.949806 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:05:24.070884 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (527)
Dec 13 14:05:24.083079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:05:24.270667 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:05:24.352062 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:05:24.358453 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:05:24.373628 systemd[1]: Starting disk-uuid.service...
Dec 13 14:05:24.397233 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:05:24.403892 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:05:25.412362 disk-uuid[603]: The operation has completed successfully.
Dec 13 14:05:25.417483 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 14:05:25.463895 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:05:25.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:25.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:25.463984 systemd[1]: Finished disk-uuid.service.
Dec 13 14:05:25.473446 systemd[1]: Starting verity-setup.service...
Dec 13 14:05:25.533918 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 14:05:25.884081 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:05:25.894518 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:05:25.898191 systemd[1]: Finished verity-setup.service.
Dec 13 14:05:25.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:25.958888 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:05:25.958933 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:05:25.962961 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:05:25.963676 systemd[1]: Starting ignition-setup.service...
Dec 13 14:05:25.971084 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:05:26.010027 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:05:26.010084 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:05:26.014644 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:05:26.072429 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:05:26.104055 kernel: kauditd_printk_skb: 10 callbacks suppressed
Dec 13 14:05:26.104077 kernel: audit: type=1130 audit(1734098726.077:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:26.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:26.082231 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:05:26.123946 kernel: audit: type=1334 audit(1734098726.080:22): prog-id=9 op=LOAD
Dec 13 14:05:26.080000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:05:26.130973 systemd-networkd[841]: lo: Link UP
Dec 13 14:05:26.130987 systemd-networkd[841]: lo: Gained carrier
Dec 13 14:05:26.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:26.131398 systemd-networkd[841]: Enumeration completed
Dec 13 14:05:26.163565 kernel: audit: type=1130 audit(1734098726.139:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:26.134726 systemd[1]: Started systemd-networkd.service.
Dec 13 14:05:26.140576 systemd-networkd[841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:05:26.157666 systemd[1]: Reached target network.target.
Dec 13 14:05:26.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:26.170441 systemd[1]: Starting iscsiuio.service...
Dec 13 14:05:26.221944 kernel: audit: type=1130 audit(1734098726.189:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:26.221967 iscsid[853]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:05:26.221967 iscsid[853]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Dec 13 14:05:26.221967 iscsid[853]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 14:05:26.221967 iscsid[853]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:05:26.221967 iscsid[853]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:05:26.221967 iscsid[853]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:05:26.221967 iscsid[853]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:05:26.348912 kernel: audit: type=1130 audit(1734098726.224:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:26.348940 kernel: audit: type=1130 audit(1734098726.290:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:26.348950 kernel: mlx5_core 84b3:00:02.0 enP33971s1: Link up
Dec 13 14:05:26.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:26.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:26.174534 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:05:26.179150 systemd[1]: Started iscsiuio.service.
Dec 13 14:05:26.404904 kernel: hv_netvsc 002248b8-5942-0022-48b8-5942002248b8 eth0: Data path switched to VF: enP33971s1
Dec 13 14:05:26.405048 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:05:26.405058 kernel: audit: type=1130 audit(1734098726.382:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:26.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:26.212843 systemd[1]: Starting iscsid.service...
Dec 13 14:05:26.220912 systemd[1]: Started iscsid.service.
Dec 13 14:05:26.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:26.226122 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:05:26.441378 kernel: audit: type=1130 audit(1734098726.416:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:26.251258 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:05:26.291166 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:05:26.323799 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:05:26.341892 systemd[1]: Reached target remote-fs.target.
Dec 13 14:05:26.354044 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:05:26.369052 systemd[1]: Finished ignition-setup.service.
Dec 13 14:05:26.372120 systemd-networkd[841]: enP33971s1: Link UP
Dec 13 14:05:26.372298 systemd-networkd[841]: eth0: Link UP
Dec 13 14:05:26.381980 systemd-networkd[841]: eth0: Gained carrier
Dec 13 14:05:26.385413 systemd-networkd[841]: enP33971s1: Gained carrier
Dec 13 14:05:26.403333 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:05:26.408299 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:05:26.441938 systemd-networkd[841]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16
Dec 13 14:05:28.121980 systemd-networkd[841]: eth0: Gained IPv6LL
Dec 13 14:05:31.706464 ignition[868]: Ignition 2.14.0
Dec 13 14:05:31.706474 ignition[868]: Stage: fetch-offline
Dec 13 14:05:31.706528 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:05:31.706549 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:05:31.811901 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:05:31.812042 ignition[868]: parsed url from cmdline: ""
Dec 13 14:05:31.818625 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:05:31.847958 kernel: audit: type=1130 audit(1734098731.823:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:31.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:31.812045 ignition[868]: no config URL provided
Dec 13 14:05:31.825073 systemd[1]: Starting ignition-fetch.service...
Dec 13 14:05:31.812050 ignition[868]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:05:31.812058 ignition[868]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:05:31.812063 ignition[868]: failed to fetch config: resource requires networking
Dec 13 14:05:31.812156 ignition[868]: Ignition finished successfully
Dec 13 14:05:31.851142 ignition[874]: Ignition 2.14.0
Dec 13 14:05:31.851148 ignition[874]: Stage: fetch
Dec 13 14:05:31.851250 ignition[874]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:05:31.851272 ignition[874]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:05:31.857452 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:05:31.857625 ignition[874]: parsed url from cmdline: ""
Dec 13 14:05:31.857629 ignition[874]: no config URL provided
Dec 13 14:05:31.857634 ignition[874]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:05:31.857652 ignition[874]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:05:31.857686 ignition[874]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 13 14:05:31.975969 ignition[874]: GET result: OK
Dec 13 14:05:31.976040 ignition[874]: config has been read from IMDS userdata
Dec 13 14:05:31.979187 unknown[874]: fetched base config from "system"
Dec 13 14:05:32.012927 kernel: audit: type=1130 audit(1734098731.989:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:31.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:31.976085 ignition[874]: parsing config with SHA512: 2ddd1f228fcae938cf8a227437fa599dc6441dc41ee2db1396ef40dae7d1b6db50d6c9fb1237a6034debd2d3d0ee2de913188c6610a4ba75c5e9e1367ada0a34
Dec 13 14:05:31.979194 unknown[874]: fetched base config from "system"
Dec 13 14:05:31.979748 ignition[874]: fetch: fetch complete
Dec 13 14:05:31.979207 unknown[874]: fetched user config from "azure"
Dec 13 14:05:31.979754 ignition[874]: fetch: fetch passed
Dec 13 14:05:31.985068 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:05:32.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:31.979789 ignition[874]: Ignition finished successfully
Dec 13 14:05:31.990199 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:05:32.064599 kernel: audit: type=1130 audit(1734098732.032:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:32.019436 ignition[880]: Ignition 2.14.0
Dec 13 14:05:32.028675 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:05:32.094002 kernel: audit: type=1130 audit(1734098732.076:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:32.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:32.019441 ignition[880]: Stage: kargs
Dec 13 14:05:32.052286 systemd[1]: Starting ignition-disks.service...
Dec 13 14:05:32.019540 ignition[880]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:05:32.071666 systemd[1]: Finished ignition-disks.service.
Dec 13 14:05:32.019558 ignition[880]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:05:32.076542 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:05:32.023172 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:05:32.098801 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:05:32.025828 ignition[880]: kargs: kargs passed
Dec 13 14:05:32.106145 systemd[1]: Reached target local-fs.target.
Dec 13 14:05:32.025905 ignition[880]: Ignition finished successfully
Dec 13 14:05:32.114238 systemd[1]: Reached target sysinit.target.
Dec 13 14:05:32.061351 ignition[886]: Ignition 2.14.0
Dec 13 14:05:32.124209 systemd[1]: Reached target basic.target.
Dec 13 14:05:32.061357 ignition[886]: Stage: disks
Dec 13 14:05:32.132876 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:05:32.061480 ignition[886]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:05:32.061509 ignition[886]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:05:32.066216 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:05:32.068181 ignition[886]: disks: disks passed
Dec 13 14:05:32.068233 ignition[886]: Ignition finished successfully
Dec 13 14:05:32.208631 systemd-fsck[894]: ROOT: clean, 621/7326000 files, 481076/7359488 blocks
Dec 13 14:05:32.219150 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:05:32.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:32.228383 systemd[1]: Mounting sysroot.mount...
Dec 13 14:05:32.256127 kernel: audit: type=1130 audit(1734098732.223:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:32.268904 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:05:32.268908 systemd[1]: Mounted sysroot.mount.
Dec 13 14:05:32.272753 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:05:32.372608 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:05:32.377368 systemd[1]: Starting flatcar-metadata-hostname.service...
Dec 13 14:05:32.384893 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:05:32.384924 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:05:32.391084 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:05:32.464206 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:05:32.469595 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:05:32.500579 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (905)
Dec 13 14:05:32.500637 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:05:32.505472 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:05:32.510169 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:05:32.510366 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:05:32.519556 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:05:32.546381 initrd-setup-root[936]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:05:32.579672 initrd-setup-root[944]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:05:32.588554 initrd-setup-root[952]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:05:33.466086 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:05:33.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:33.471934 systemd[1]: Starting ignition-mount.service...
Dec 13 14:05:33.501065 kernel: audit: type=1130 audit(1734098733.470:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:33.499570 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:05:33.505557 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:05:33.505664 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:05:33.532704 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:05:33.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:33.557884 kernel: audit: type=1130 audit(1734098733.536:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:33.576148 ignition[973]: INFO : Ignition 2.14.0
Dec 13 14:05:33.576148 ignition[973]: INFO : Stage: mount
Dec 13 14:05:33.585634 ignition[973]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:05:33.585634 ignition[973]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:05:33.585634 ignition[973]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:05:33.585634 ignition[973]: INFO : mount: mount passed
Dec 13 14:05:33.585634 ignition[973]: INFO : Ignition finished successfully
Dec 13 14:05:33.644439 kernel: audit: type=1130 audit(1734098733.596:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:33.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:33.586728 systemd[1]: Finished ignition-mount.service.
Dec 13 14:05:35.202681 coreos-metadata[904]: Dec 13 14:05:35.202 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 14:05:35.213021 coreos-metadata[904]: Dec 13 14:05:35.212 INFO Fetch successful
Dec 13 14:05:35.246092 coreos-metadata[904]: Dec 13 14:05:35.246 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 13 14:05:35.259446 coreos-metadata[904]: Dec 13 14:05:35.259 INFO Fetch successful
Dec 13 14:05:35.290000 coreos-metadata[904]: Dec 13 14:05:35.289 INFO wrote hostname ci-3510.3.6-a-18113e8891 to /sysroot/etc/hostname
Dec 13 14:05:35.299328 systemd[1]: Finished flatcar-metadata-hostname.service.
Dec 13 14:05:35.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:35.325939 systemd[1]: Starting ignition-files.service...
Dec 13 14:05:35.336210 kernel: audit: type=1130 audit(1734098735.304:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:05:35.335740 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:05:35.356925 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (983)
Dec 13 14:05:35.369388 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:05:35.369426 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:05:35.375285 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 14:05:35.379356 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:05:35.397221 ignition[1002]: INFO : Ignition 2.14.0
Dec 13 14:05:35.397221 ignition[1002]: INFO : Stage: files
Dec 13 14:05:35.406163 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:05:35.406163 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 14:05:35.406163 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 14:05:35.406163 ignition[1002]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:05:35.439106 ignition[1002]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:05:35.439106 ignition[1002]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:05:35.570968 ignition[1002]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:05:35.578480 ignition[1002]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:05:35.578480 ignition[1002]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:05:35.575785 unknown[1002]: wrote ssh authorized keys file for user: core
Dec 13 14:05:35.602127 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 14:05:35.611910 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 14:05:35.611910 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 14:05:35.611910 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 14:05:35.725000 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 14:05:35.901108 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 14:05:35.911938 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:05:35.911938 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 13 14:05:36.375060 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Dec 13 14:05:36.445095 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:05:36.463402 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3853220196"
Dec 13 14:05:36.630751 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1005)
Dec 13 14:05:36.487123 systemd[1]: mnt-oem3853220196.mount: Deactivated successfully.
Dec 13 14:05:36.636736 ignition[1002]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3853220196": device or resource busy Dec 13 14:05:36.636736 ignition[1002]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3853220196", trying btrfs: device or resource busy Dec 13 14:05:36.636736 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3853220196" Dec 13 14:05:36.636736 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3853220196" Dec 13 14:05:36.636736 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem3853220196" Dec 13 14:05:36.636736 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem3853220196" Dec 13 14:05:36.636736 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:05:36.636736 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:05:36.636736 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:05:36.636736 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2436027695" Dec 13 14:05:36.636736 ignition[1002]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2436027695": device or resource busy Dec 13 14:05:36.636736 ignition[1002]: ERROR : files: 
createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2436027695", trying btrfs: device or resource busy Dec 13 14:05:36.636736 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2436027695" Dec 13 14:05:36.636736 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2436027695" Dec 13 14:05:36.792954 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem2436027695" Dec 13 14:05:36.792954 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem2436027695" Dec 13 14:05:36.792954 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:05:36.792954 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:05:36.792954 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(14): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 14:05:36.848060 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(14): GET result: OK Dec 13 14:05:37.011294 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:05:37.011294 ignition[1002]: INFO : files: op(15): [started] processing unit "waagent.service" Dec 13 14:05:37.011294 ignition[1002]: INFO : files: op(15): [finished] processing unit "waagent.service" Dec 13 14:05:37.011294 ignition[1002]: INFO : files: op(16): [started] processing unit "nvidia.service" Dec 13 
14:05:37.011294 ignition[1002]: INFO : files: op(16): [finished] processing unit "nvidia.service" Dec 13 14:05:37.011294 ignition[1002]: INFO : files: op(17): [started] processing unit "containerd.service" Dec 13 14:05:37.092443 kernel: audit: type=1130 audit(1734098737.035:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.025324 systemd[1]: Finished ignition-files.service. Dec 13 14:05:37.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:37.120738 ignition[1002]: INFO : files: op(17): op(18): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:05:37.120738 ignition[1002]: INFO : files: op(17): op(18): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:05:37.120738 ignition[1002]: INFO : files: op(17): [finished] processing unit "containerd.service" Dec 13 14:05:37.120738 ignition[1002]: INFO : files: op(19): [started] processing unit "prepare-helm.service" Dec 13 14:05:37.120738 ignition[1002]: INFO : files: op(19): op(1a): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:05:37.120738 ignition[1002]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:05:37.120738 ignition[1002]: INFO : files: op(19): [finished] processing unit "prepare-helm.service" Dec 13 14:05:37.120738 ignition[1002]: INFO : files: op(1b): [started] setting preset to enabled for "waagent.service" Dec 13 14:05:37.120738 ignition[1002]: INFO : files: op(1b): [finished] setting preset to enabled for "waagent.service" Dec 13 14:05:37.120738 ignition[1002]: INFO : files: op(1c): [started] setting preset to enabled for "nvidia.service" Dec 13 14:05:37.120738 ignition[1002]: INFO : files: op(1c): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:05:37.120738 ignition[1002]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:05:37.120738 ignition[1002]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:05:37.120738 ignition[1002]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:05:37.120738 
ignition[1002]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:05:37.120738 ignition[1002]: INFO : files: files passed Dec 13 14:05:37.120738 ignition[1002]: INFO : Ignition finished successfully Dec 13 14:05:37.397159 kernel: audit: type=1130 audit(1734098737.096:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.397189 kernel: audit: type=1131 audit(1734098737.096:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.397199 kernel: audit: type=1130 audit(1734098737.139:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.397217 kernel: audit: type=1130 audit(1734098737.234:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.397227 kernel: audit: type=1131 audit(1734098737.259:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.397236 kernel: audit: type=1130 audit(1734098737.366:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:37.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.060366 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:05:37.065340 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:05:37.414277 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:05:37.066530 systemd[1]: Starting ignition-quench.service... Dec 13 14:05:37.085145 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:05:37.085274 systemd[1]: Finished ignition-quench.service. Dec 13 14:05:37.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.096803 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:05:37.479344 kernel: audit: type=1131 audit(1734098737.449:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.140166 systemd[1]: Reached target ignition-complete.target. Dec 13 14:05:37.188756 systemd[1]: Starting initrd-parse-etc.service... 
Dec 13 14:05:37.220506 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:05:37.220626 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:05:37.259724 systemd[1]: Reached target initrd-fs.target. Dec 13 14:05:37.290418 systemd[1]: Reached target initrd.target. Dec 13 14:05:37.301598 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:05:37.302461 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:05:37.352625 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:05:37.368110 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:05:37.404510 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:05:37.600274 kernel: audit: type=1131 audit(1734098737.576:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.418534 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:05:37.431751 systemd[1]: Stopped target timers.target. Dec 13 14:05:37.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.440701 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:05:37.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:37.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.440761 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:05:37.655537 kernel: audit: type=1131 audit(1734098737.604:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.450273 systemd[1]: Stopped target initrd.target. Dec 13 14:05:37.474032 systemd[1]: Stopped target basic.target. Dec 13 14:05:37.483472 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:05:37.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:37.686936 ignition[1040]: INFO : Ignition 2.14.0 Dec 13 14:05:37.686936 ignition[1040]: INFO : Stage: umount Dec 13 14:05:37.686936 ignition[1040]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:05:37.686936 ignition[1040]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:05:37.686936 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:05:37.686936 ignition[1040]: INFO : umount: umount passed Dec 13 14:05:37.686936 ignition[1040]: INFO : Ignition finished successfully Dec 13 14:05:37.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:37.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.495614 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:05:37.505187 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:05:37.515141 systemd[1]: Stopped target remote-fs.target. Dec 13 14:05:37.523601 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:05:37.532040 systemd[1]: Stopped target sysinit.target. Dec 13 14:05:37.540025 systemd[1]: Stopped target local-fs.target. Dec 13 14:05:37.551579 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:05:37.560284 systemd[1]: Stopped target swap.target. Dec 13 14:05:37.568512 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:05:37.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.568585 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:05:37.576910 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:05:37.600608 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:05:37.600667 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:05:37.605131 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:05:37.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:05:37.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.605172 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:05:37.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.631850 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:05:37.899000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:05:37.631910 systemd[1]: Stopped ignition-files.service. Dec 13 14:05:37.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.640243 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 14:05:37.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.640281 systemd[1]: Stopped flatcar-metadata-hostname.service. Dec 13 14:05:37.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.652207 systemd[1]: Stopping ignition-mount.service... Dec 13 14:05:37.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.669223 systemd[1]: Stopping sysroot-boot.service... 
Dec 13 14:05:37.677337 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:05:37.677410 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:05:37.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.682374 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:05:37.682416 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:05:37.693387 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:05:37.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.693510 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:05:38.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.701298 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:05:38.043633 kernel: hv_netvsc 002248b8-5942-0022-48b8-5942002248b8 eth0: Data path switched from VF: enP33971s1 Dec 13 14:05:38.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.701401 systemd[1]: Stopped ignition-mount.service. 
Dec 13 14:05:38.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.709568 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:05:38.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.709625 systemd[1]: Stopped ignition-disks.service. Dec 13 14:05:38.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:38.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.722186 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:05:37.722231 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:05:37.727207 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:05:37.727241 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:05:37.746304 systemd[1]: Stopped target network.target. Dec 13 14:05:37.757520 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:05:37.757572 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:05:37.766600 systemd[1]: Stopped target paths.target. Dec 13 14:05:37.775799 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:05:37.786137 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:05:37.791502 systemd[1]: Stopped target slices.target. 
Dec 13 14:05:38.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:05:37.799754 systemd[1]: Stopped target sockets.target. Dec 13 14:05:37.814125 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:05:37.814158 systemd[1]: Closed iscsid.socket. Dec 13 14:05:37.822209 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:05:37.822231 systemd[1]: Closed iscsiuio.socket. Dec 13 14:05:38.159000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:05:38.159000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:05:38.159000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:05:38.159000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:05:38.159000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:05:37.830404 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:05:37.830442 systemd[1]: Stopped ignition-setup.service. Dec 13 14:05:37.838658 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:05:37.848129 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:05:37.857222 systemd-networkd[841]: eth0: DHCPv6 lease lost Dec 13 14:05:38.181000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:05:37.868563 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:05:37.869087 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:05:37.869175 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:05:38.205386 iscsid[853]: iscsid shutting down. Dec 13 14:05:37.877648 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:05:37.877742 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:05:37.883156 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:05:37.883241 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:05:37.894433 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:05:37.894472 systemd[1]: Closed systemd-networkd.socket. 
Dec 13 14:05:37.903670 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:05:37.903713 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:05:37.914035 systemd[1]: Stopping network-cleanup.service... Dec 13 14:05:37.921356 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:05:37.921417 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:05:37.926327 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:05:37.926372 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:05:37.940922 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:05:37.940970 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:05:37.946134 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:05:37.955436 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:05:38.205884 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Dec 13 14:05:37.960285 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:05:37.960434 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:05:37.970644 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:05:37.970679 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:05:37.979519 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:05:37.979551 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:05:37.989498 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:05:37.989546 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:05:37.999715 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:05:37.999934 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:05:38.009479 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:05:38.009528 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:05:38.027831 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Dec 13 14:05:38.032780 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:05:38.032829 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:05:38.043454 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:05:38.043522 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:05:38.048055 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:05:38.048101 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:05:38.057975 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:05:38.058441 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:05:38.058519 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:05:38.117107 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:05:38.117198 systemd[1]: Stopped network-cleanup.service. Dec 13 14:05:38.125534 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:05:38.135203 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:05:38.159781 systemd[1]: Switching root. Dec 13 14:05:38.206541 systemd-journald[276]: Journal stopped Dec 13 14:06:00.508489 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:06:00.508510 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 14:06:00.508520 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:06:00.508530 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:06:00.508538 kernel: SELinux: policy capability open_perms=1 Dec 13 14:06:00.508546 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:06:00.508555 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:06:00.508563 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:06:00.508571 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:06:00.508578 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:06:00.508587 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:06:00.508597 systemd[1]: Successfully loaded SELinux policy in 129.373ms. Dec 13 14:06:00.508608 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.868ms. Dec 13 14:06:00.508618 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:06:00.508628 systemd[1]: Detected virtualization microsoft. Dec 13 14:06:00.508638 systemd[1]: Detected architecture arm64. Dec 13 14:06:00.508647 systemd[1]: Detected first boot. Dec 13 14:06:00.508656 systemd[1]: Hostname set to . Dec 13 14:06:00.508665 systemd[1]: Initializing machine ID from random generator. Dec 13 14:06:00.508674 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Dec 13 14:06:00.508682 kernel: kauditd_printk_skb: 39 callbacks suppressed Dec 13 14:06:00.508692 kernel: audit: type=1400 audit(1734098744.873:87): avc: denied { associate } for pid=1092 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:06:00.508704 kernel: audit: type=1300 audit(1734098744.873:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014766c a1=40000c8af8 a2=40000cea00 a3=32 items=0 ppid=1075 pid=1092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:06:00.508713 kernel: audit: type=1327 audit(1734098744.873:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:06:00.508723 kernel: audit: type=1400 audit(1734098744.883:88): avc: denied { associate } for pid=1092 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:06:00.508732 kernel: audit: type=1300 audit(1734098744.883:88): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000147745 a2=1ed a3=0 items=2 ppid=1075 pid=1092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:06:00.508741 kernel: audit: type=1307 audit(1734098744.883:88): cwd="/" Dec 13 14:06:00.508751 kernel: audit: type=1302 audit(1734098744.883:88): 
item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:00.508760 kernel: audit: type=1302 audit(1734098744.883:88): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:00.508769 kernel: audit: type=1327 audit(1734098744.883:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:06:00.508778 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:06:00.508787 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:06:00.508797 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:06:00.508807 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:06:00.508818 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:06:00.508827 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:06:00.508836 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:06:00.508845 systemd[1]: Created slice system-getty.slice. Dec 13 14:06:00.508854 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:06:00.508876 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:06:00.508888 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
Dec 13 14:06:00.508898 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:06:00.508909 systemd[1]: Created slice user.slice. Dec 13 14:06:00.508919 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:06:00.508928 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:06:00.508937 systemd[1]: Set up automount boot.automount. Dec 13 14:06:00.508946 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:06:00.508956 systemd[1]: Reached target integritysetup.target. Dec 13 14:06:00.508965 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:06:00.508974 systemd[1]: Reached target remote-fs.target. Dec 13 14:06:00.508983 systemd[1]: Reached target slices.target. Dec 13 14:06:00.508994 systemd[1]: Reached target swap.target. Dec 13 14:06:00.509003 systemd[1]: Reached target torcx.target. Dec 13 14:06:00.509013 systemd[1]: Reached target veritysetup.target. Dec 13 14:06:00.509022 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:06:00.509031 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:06:00.509041 kernel: audit: type=1400 audit(1734098759.784:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:06:00.509050 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:06:00.509061 kernel: audit: type=1335 audit(1734098759.784:90): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:06:00.509070 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:06:00.509079 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:06:00.509088 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:06:00.509098 systemd[1]: Listening on systemd-udevd-control.socket. 
Dec 13 14:06:00.509108 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:06:00.509119 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:06:00.509128 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:06:00.509138 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:06:00.509147 systemd[1]: Mounting media.mount... Dec 13 14:06:00.509156 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:06:00.509166 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:06:00.509175 systemd[1]: Mounting tmp.mount... Dec 13 14:06:00.509186 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:06:00.509196 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:06:00.509205 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:06:00.509215 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:06:00.509224 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:00.509233 systemd[1]: Starting modprobe@drm.service... Dec 13 14:06:00.509243 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:00.509252 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:06:00.509261 systemd[1]: Starting modprobe@loop.service... Dec 13 14:06:00.509272 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:06:00.509282 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 14:06:00.509291 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 14:06:00.509301 systemd[1]: Starting systemd-journald.service... Dec 13 14:06:00.509311 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:06:00.509320 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:06:00.509329 systemd[1]: Starting systemd-remount-fs.service... 
Dec 13 14:06:00.509339 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:06:00.509348 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:06:00.509359 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:06:00.509368 systemd[1]: Mounted media.mount. Dec 13 14:06:00.509377 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:06:00.509387 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:06:00.509397 systemd[1]: Mounted tmp.mount. Dec 13 14:06:00.509406 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:06:00.509416 kernel: audit: type=1130 audit(1734098760.229:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.509425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:06:00.509435 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:06:00.509444 kernel: loop: module loaded Dec 13 14:06:00.509454 kernel: audit: type=1130 audit(1734098760.276:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.509463 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:06:00.509473 kernel: audit: type=1131 audit(1734098760.276:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.509482 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:06:00.509491 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 13 14:06:00.509501 kernel: audit: type=1130 audit(1734098760.329:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.509513 systemd[1]: Finished modprobe@drm.service. Dec 13 14:06:00.509522 kernel: fuse: init (API version 7.34) Dec 13 14:06:00.509531 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:00.509541 kernel: audit: type=1131 audit(1734098760.329:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.509550 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:06:00.509559 kernel: audit: type=1130 audit(1734098760.359:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.509568 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:06:00.509578 kernel: audit: type=1131 audit(1734098760.359:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.509589 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:06:00.509598 kernel: audit: type=1130 audit(1734098760.398:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.509607 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:06:00.509617 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:00.509626 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:06:00.509639 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:06:00.509653 systemd-journald[1183]: Journal started Dec 13 14:06:00.509694 systemd-journald[1183]: Runtime Journal (/run/log/journal/f13df39947d946f1af26453173ae1e44) is 8.0M, max 78.5M, 70.5M free. Dec 13 14:05:59.784000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:06:00.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:00.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:00.505000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:06:00.505000 audit[1183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffe1c82330 a2=4000 a3=1 items=0 ppid=1 pid=1183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:06:00.505000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:06:00.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.521644 systemd[1]: Started systemd-journald.service. Dec 13 14:06:00.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.522906 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:06:00.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.528314 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:06:00.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.533546 systemd[1]: Finished systemd-udev-trigger.service. 
Dec 13 14:06:00.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.538931 systemd[1]: Reached target network-pre.target. Dec 13 14:06:00.544610 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:06:00.550232 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:06:00.554416 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:06:00.556343 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:06:00.561960 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:06:00.566673 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:06:00.567940 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:06:00.572349 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:06:00.573503 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:06:00.578567 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:06:00.584057 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:06:00.590479 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:06:00.595314 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:06:00.601591 udevadm[1242]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:06:00.663656 systemd-journald[1183]: Time spent on flushing to /var/log/journal/f13df39947d946f1af26453173ae1e44 is 12.651ms for 1041 entries. Dec 13 14:06:00.663656 systemd-journald[1183]: System Journal (/var/log/journal/f13df39947d946f1af26453173ae1e44) is 8.0M, max 2.6G, 2.6G free. 
Dec 13 14:06:01.073923 systemd-journald[1183]: Received client request to flush runtime journal. Dec 13 14:06:00.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:00.713083 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:06:00.721756 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:06:00.727374 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:06:01.074932 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:06:01.876976 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:06:01.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:01.883202 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:06:02.716049 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:06:02.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.027239 systemd[1]: Finished systemd-hwdb-update.service. 
Dec 13 14:06:04.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.033376 systemd[1]: Starting systemd-udevd.service... Dec 13 14:06:04.051576 systemd-udevd[1253]: Using default interface naming scheme 'v252'. Dec 13 14:06:04.098259 systemd[1]: Started systemd-udevd.service. Dec 13 14:06:04.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.108655 systemd[1]: Starting systemd-networkd.service... Dec 13 14:06:04.135784 systemd[1]: Found device dev-ttyAMA0.device. Dec 13 14:06:04.144414 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:06:04.187889 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:06:04.211572 systemd[1]: Started systemd-userdbd.service. Dec 13 14:06:04.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:04.211000 audit[1265]: AVC avc: denied { confidentiality } for pid=1265 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:06:04.275716 kernel: hv_vmbus: registering driver hv_balloon Dec 13 14:06:04.275792 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 14:06:04.275831 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 14:06:04.275848 kernel: hv_vmbus: registering driver hv_utils Dec 13 14:06:04.293969 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 14:06:04.294060 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 14:06:04.294075 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 13 14:06:04.294088 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 14:06:04.310393 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 14:06:04.310482 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 14:06:04.310902 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 14:06:04.766646 kernel: Console: switching to colour dummy device 80x25 Dec 13 14:06:04.772087 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 14:06:04.211000 audit[1265]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaae7266d50 a1=aa2c a2=ffff84b424b0 a3=aaaae71c3010 items=12 ppid=1253 pid=1265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:06:04.211000 audit: CWD cwd="/" Dec 13 14:06:04.211000 audit: PATH item=0 name=(null) inode=6696 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:04.211000 audit: PATH item=1 name=(null) inode=10832 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:04.211000 audit: PATH item=2 name=(null) inode=10832 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:04.211000 audit: PATH item=3 name=(null) inode=10833 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:04.211000 audit: PATH item=4 name=(null) inode=10832 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:04.211000 audit: PATH item=5 name=(null) inode=10834 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:04.211000 audit: PATH item=6 name=(null) inode=10832 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:04.211000 audit: PATH item=7 name=(null) inode=10835 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:04.211000 audit: PATH item=8 name=(null) inode=10832 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:04.211000 audit: PATH item=9 name=(null) inode=10836 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:04.211000 audit: PATH item=10 name=(null) inode=10832 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:04.211000 audit: PATH item=11 name=(null) inode=10837 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:06:04.211000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:06:04.818225 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1258) Dec 13 14:06:04.853862 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 14:06:04.855769 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:06:04.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.862226 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:06:04.870804 systemd-networkd[1274]: lo: Link UP Dec 13 14:06:04.871091 systemd-networkd[1274]: lo: Gained carrier Dec 13 14:06:04.871546 systemd-networkd[1274]: Enumeration completed Dec 13 14:06:04.871715 systemd[1]: Started systemd-networkd.service. Dec 13 14:06:04.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.877680 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:06:04.879660 systemd-networkd[1274]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:06:04.925639 lvm[1332]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Dec 13 14:06:04.933091 kernel: mlx5_core 84b3:00:02.0 enP33971s1: Link up Dec 13 14:06:04.959216 kernel: hv_netvsc 002248b8-5942-0022-48b8-5942002248b8 eth0: Data path switched to VF: enP33971s1 Dec 13 14:06:04.959029 systemd-networkd[1274]: enP33971s1: Link UP Dec 13 14:06:04.959147 systemd-networkd[1274]: eth0: Link UP Dec 13 14:06:04.959150 systemd-networkd[1274]: eth0: Gained carrier Dec 13 14:06:04.960388 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:06:04.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:04.965894 systemd[1]: Reached target cryptsetup.target. Dec 13 14:06:04.971402 systemd-networkd[1274]: enP33971s1: Gained carrier Dec 13 14:06:04.971626 systemd[1]: Starting lvm2-activation.service... Dec 13 14:06:04.979576 lvm[1335]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:06:04.984195 systemd-networkd[1274]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 14:06:05.004143 systemd[1]: Finished lvm2-activation.service. Dec 13 14:06:05.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.008751 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:06:05.013519 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:06:05.013550 systemd[1]: Reached target local-fs.target. Dec 13 14:06:05.017773 systemd[1]: Reached target machines.target. Dec 13 14:06:05.023193 systemd[1]: Starting ldconfig.service... 
Dec 13 14:06:05.027141 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:06:05.027205 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:05.028302 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:06:05.033355 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:06:05.039945 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:06:05.046459 systemd[1]: Starting systemd-sysext.service... Dec 13 14:06:05.050799 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1338 (bootctl) Dec 13 14:06:05.051904 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:06:05.348549 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:06:05.390149 kernel: kauditd_printk_skb: 42 callbacks suppressed Dec 13 14:06:05.390209 kernel: audit: type=1130 audit(1734098765.361:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.356514 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:06:05.362835 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:06:05.363079 systemd[1]: Unmounted usr-share-oem.mount. 
Dec 13 14:06:05.412102 kernel: loop0: detected capacity change from 0 to 194512 Dec 13 14:06:05.429333 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:06:05.450263 systemd-fsck[1347]: fsck.fat 4.2 (2021-01-31) Dec 13 14:06:05.450263 systemd-fsck[1347]: /dev/sda1: 236 files, 117175/258078 clusters Dec 13 14:06:05.452120 kernel: loop1: detected capacity change from 0 to 194512 Dec 13 14:06:05.454250 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:06:05.483461 kernel: audit: type=1130 audit(1734098765.459:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.462348 systemd[1]: Mounting boot.mount... Dec 13 14:06:05.486008 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:06:05.486942 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:06:05.489346 (sd-sysext)[1353]: Using extensions 'kubernetes'. Dec 13 14:06:05.489691 (sd-sysext)[1353]: Merged extensions into '/usr'. Dec 13 14:06:05.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.514102 kernel: audit: type=1130 audit(1734098765.495:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:05.517897 systemd[1]: Mounted boot.mount. Dec 13 14:06:05.529267 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:06:05.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.536296 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:06:05.551096 kernel: audit: type=1130 audit(1734098765.533:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.556573 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:06:05.558017 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:05.563636 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:05.569119 systemd[1]: Starting modprobe@loop.service... Dec 13 14:06:05.573266 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:06:05.573460 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:05.576879 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:06:05.582237 systemd[1]: Finished systemd-sysext.service. Dec 13 14:06:05.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.587304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:06:05.587466 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 14:06:05.608462 kernel: audit: type=1130 audit(1734098765.585:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.609543 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:05.609815 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:06:05.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.647569 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:05.647880 systemd[1]: Finished modprobe@loop.service. Dec 13 14:06:05.648359 kernel: audit: type=1130 audit(1734098765.608:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.648417 kernel: audit: type=1131 audit(1734098765.608:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:06:05.671282 kernel: audit: type=1130 audit(1734098765.646:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.675861 systemd[1]: Starting ensure-sysext.service... Dec 13 14:06:05.691144 kernel: audit: type=1131 audit(1734098765.646:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.710782 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:06:05.710958 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:06:05.711969 kernel: audit: type=1130 audit(1734098765.669:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.712679 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:06:05.726853 systemd[1]: Reloading. 
Dec 13 14:06:05.728526 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:06:05.732479 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:06:05.737226 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:06:05.789695 /usr/lib/systemd/system-generators/torcx-generator[1395]: time="2024-12-13T14:06:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:06:05.789736 /usr/lib/systemd/system-generators/torcx-generator[1395]: time="2024-12-13T14:06:05Z" level=info msg="torcx already run" Dec 13 14:06:05.862531 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:06:05.862548 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:06:05.879679 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:06:05.942365 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:06:05.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:05.951009 systemd[1]: Starting audit-rules.service... Dec 13 14:06:05.956644 systemd[1]: Starting clean-ca-certificates.service... 
Dec 13 14:06:05.966452 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:06:05.978256 systemd[1]: Starting systemd-resolved.service... Dec 13 14:06:05.985043 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:06:05.991461 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:06:05.999619 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:06:05.998000 audit[1471]: SYSTEM_BOOT pid=1471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.011578 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:06:06.017100 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:06:06.019086 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:06.025590 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:06.032816 systemd[1]: Starting modprobe@loop.service... Dec 13 14:06:06.039540 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:06:06.039939 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:06.040109 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:06:06.041326 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 14:06:06.041504 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:06:06.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.050845 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:06.051019 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:06:06.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.058132 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:06.058468 systemd[1]: Finished modprobe@loop.service. Dec 13 14:06:06.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.065716 systemd[1]: Finished systemd-journal-catalog-update.service. 
Dec 13 14:06:06.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.076245 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:06:06.076540 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:06:06.078348 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:06:06.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.090355 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:06:06.091967 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:06.099447 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:06.111299 systemd[1]: Starting modprobe@loop.service... Dec 13 14:06:06.118132 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:06:06.118490 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:06.118595 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:06:06.119478 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:06:06.119685 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 14:06:06.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.130155 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:06.130330 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:06:06.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.136663 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:06.137006 systemd[1]: Finished modprobe@loop.service. Dec 13 14:06:06.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:06:06.145214 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 14:06:06.145495 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:06:06.149946 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:06:06.152667 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:06:06.159791 systemd[1]: Starting modprobe@drm.service... Dec 13 14:06:06.161000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:06:06.161000 audit[1499]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffe9ca5f0 a2=420 a3=0 items=0 ppid=1463 pid=1499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:06:06.161000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:06:06.163109 augenrules[1499]: No rules Dec 13 14:06:06.168485 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:06:06.178908 systemd[1]: Starting modprobe@loop.service... Dec 13 14:06:06.185408 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:06:06.185572 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:06.185697 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:06:06.187438 systemd[1]: Finished audit-rules.service. Dec 13 14:06:06.194834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:06:06.195052 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:06:06.200371 systemd[1]: Started systemd-timesyncd.service. 
Dec 13 14:06:06.205266 systemd-resolved[1468]: Positive Trust Anchors: Dec 13 14:06:06.205280 systemd-resolved[1468]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:06:06.205310 systemd-resolved[1468]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:06:06.206540 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:06:06.206714 systemd[1]: Finished modprobe@drm.service. Dec 13 14:06:06.212319 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:06:06.212483 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:06:06.217831 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:06:06.218005 systemd[1]: Finished modprobe@loop.service. Dec 13 14:06:06.219539 systemd-resolved[1468]: Using system hostname 'ci-3510.3.6-a-18113e8891'. Dec 13 14:06:06.223499 systemd[1]: Started systemd-resolved.service. Dec 13 14:06:06.229125 systemd[1]: Reached target network.target. Dec 13 14:06:06.233624 systemd[1]: Reached target nss-lookup.target. Dec 13 14:06:06.238573 systemd[1]: Reached target time-set.target. Dec 13 14:06:06.243421 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:06:06.243526 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:06:06.245264 systemd[1]: Finished ensure-sysext.service. 
Dec 13 14:06:06.448270 systemd-networkd[1274]: eth0: Gained IPv6LL Dec 13 14:06:06.450268 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:06:06.456090 systemd[1]: Reached target network-online.target. Dec 13 14:06:14.347697 ldconfig[1337]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:06:14.367207 systemd[1]: Finished ldconfig.service. Dec 13 14:06:14.373631 systemd[1]: Starting systemd-update-done.service... Dec 13 14:06:14.433389 systemd[1]: Finished systemd-update-done.service. Dec 13 14:06:14.439422 systemd[1]: Reached target sysinit.target. Dec 13 14:06:14.443978 systemd[1]: Started motdgen.path. Dec 13 14:06:14.447886 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:06:14.454195 systemd[1]: Started logrotate.timer. Dec 13 14:06:14.458229 systemd[1]: Started mdadm.timer. Dec 13 14:06:14.461900 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:06:14.466611 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:06:14.466641 systemd[1]: Reached target paths.target. Dec 13 14:06:14.470817 systemd[1]: Reached target timers.target. Dec 13 14:06:14.477136 systemd[1]: Listening on dbus.socket. Dec 13 14:06:14.482398 systemd[1]: Starting docker.socket... Dec 13 14:06:14.487061 systemd[1]: Listening on sshd.socket. Dec 13 14:06:14.491210 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:14.491614 systemd[1]: Listening on docker.socket. Dec 13 14:06:14.495896 systemd[1]: Reached target sockets.target. Dec 13 14:06:14.500166 systemd[1]: Reached target basic.target. 
Dec 13 14:06:14.504578 systemd[1]: System is tainted: cgroupsv1 Dec 13 14:06:14.504633 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:06:14.504654 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:06:14.505815 systemd[1]: Starting containerd.service... Dec 13 14:06:14.511041 systemd[1]: Starting dbus.service... Dec 13 14:06:14.515602 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:06:14.521049 systemd[1]: Starting extend-filesystems.service... Dec 13 14:06:14.525665 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:06:14.527124 systemd[1]: Starting kubelet.service... Dec 13 14:06:14.531880 systemd[1]: Starting motdgen.service... Dec 13 14:06:14.536622 systemd[1]: Started nvidia.service. Dec 13 14:06:14.541728 systemd[1]: Starting prepare-helm.service... Dec 13 14:06:14.546719 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:06:14.552049 systemd[1]: Starting sshd-keygen.service... Dec 13 14:06:14.557610 systemd[1]: Starting systemd-logind.service... Dec 13 14:06:14.561682 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:06:14.561760 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:06:14.562881 systemd[1]: Starting update-engine.service... Dec 13 14:06:14.568990 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:06:14.576792 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:06:14.577045 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Dec 13 14:06:14.653590 jq[1526]: false Dec 13 14:06:14.654039 jq[1547]: true Dec 13 14:06:14.660190 extend-filesystems[1527]: Found loop1 Dec 13 14:06:14.665438 extend-filesystems[1527]: Found sda Dec 13 14:06:14.665438 extend-filesystems[1527]: Found sda1 Dec 13 14:06:14.665438 extend-filesystems[1527]: Found sda2 Dec 13 14:06:14.665438 extend-filesystems[1527]: Found sda3 Dec 13 14:06:14.665438 extend-filesystems[1527]: Found usr Dec 13 14:06:14.665438 extend-filesystems[1527]: Found sda4 Dec 13 14:06:14.665438 extend-filesystems[1527]: Found sda6 Dec 13 14:06:14.665438 extend-filesystems[1527]: Found sda7 Dec 13 14:06:14.665438 extend-filesystems[1527]: Found sda9 Dec 13 14:06:14.665438 extend-filesystems[1527]: Checking size of /dev/sda9 Dec 13 14:06:14.689510 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:06:14.689785 systemd[1]: Finished motdgen.service. Dec 13 14:06:14.709159 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:06:14.709418 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:06:14.735867 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:06:14.742247 systemd-logind[1542]: New seat seat0. Dec 13 14:06:14.758599 jq[1565]: true Dec 13 14:06:14.787175 env[1556]: time="2024-12-13T14:06:14.787105860Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:06:14.845358 env[1556]: time="2024-12-13T14:06:14.845311940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:06:14.845637 env[1556]: time="2024-12-13T14:06:14.845618740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:14.849216 env[1556]: time="2024-12-13T14:06:14.849183100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:06:14.849344 env[1556]: time="2024-12-13T14:06:14.849327540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:14.849674 env[1556]: time="2024-12-13T14:06:14.849650100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:06:14.849758 env[1556]: time="2024-12-13T14:06:14.849743420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:14.849820 env[1556]: time="2024-12-13T14:06:14.849805300Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:06:14.849873 env[1556]: time="2024-12-13T14:06:14.849859700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:14.850007 env[1556]: time="2024-12-13T14:06:14.849991460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:14.850348 env[1556]: time="2024-12-13T14:06:14.850327900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:06:14.850588 env[1556]: time="2024-12-13T14:06:14.850568020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:06:14.850654 env[1556]: time="2024-12-13T14:06:14.850641060Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:06:14.850760 env[1556]: time="2024-12-13T14:06:14.850743980Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:06:14.850821 env[1556]: time="2024-12-13T14:06:14.850808620Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:06:14.865498 tar[1550]: linux-arm64/helm Dec 13 14:06:14.868291 env[1556]: time="2024-12-13T14:06:14.868255300Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:06:14.868410 env[1556]: time="2024-12-13T14:06:14.868394260Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:06:14.868472 env[1556]: time="2024-12-13T14:06:14.868459220Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:06:14.868563 env[1556]: time="2024-12-13T14:06:14.868550340Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.868682 env[1556]: time="2024-12-13T14:06:14.868669060Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.868754 env[1556]: time="2024-12-13T14:06:14.868740540Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.868811 env[1556]: time="2024-12-13T14:06:14.868797060Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Dec 13 14:06:14.869219 env[1556]: time="2024-12-13T14:06:14.869197220Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.869314 env[1556]: time="2024-12-13T14:06:14.869298220Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.869376 env[1556]: time="2024-12-13T14:06:14.869363020Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.869439 env[1556]: time="2024-12-13T14:06:14.869426500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.869499 env[1556]: time="2024-12-13T14:06:14.869486380Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:06:14.869678 env[1556]: time="2024-12-13T14:06:14.869661420Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:06:14.869817 env[1556]: time="2024-12-13T14:06:14.869802060Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:06:14.870218 env[1556]: time="2024-12-13T14:06:14.870200140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:06:14.870447 env[1556]: time="2024-12-13T14:06:14.870426500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.871095 extend-filesystems[1527]: Old size kept for /dev/sda9 Dec 13 14:06:14.871095 extend-filesystems[1527]: Found sr0 Dec 13 14:06:14.878695 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Dec 13 14:06:14.916346 env[1556]: time="2024-12-13T14:06:14.881142580Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:06:14.916346 env[1556]: time="2024-12-13T14:06:14.881226580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.916346 env[1556]: time="2024-12-13T14:06:14.881240540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.916346 env[1556]: time="2024-12-13T14:06:14.881253340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.916346 env[1556]: time="2024-12-13T14:06:14.881266300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.916346 env[1556]: time="2024-12-13T14:06:14.881338700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.916346 env[1556]: time="2024-12-13T14:06:14.881352860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.916346 env[1556]: time="2024-12-13T14:06:14.881366540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.916346 env[1556]: time="2024-12-13T14:06:14.881378500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.916346 env[1556]: time="2024-12-13T14:06:14.881407940Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:06:14.916346 env[1556]: time="2024-12-13T14:06:14.881576740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Dec 13 14:06:14.916346 env[1556]: time="2024-12-13T14:06:14.881593860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.916346 env[1556]: time="2024-12-13T14:06:14.881606420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.916346 env[1556]: time="2024-12-13T14:06:14.881618740Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:06:14.878962 systemd[1]: Finished extend-filesystems.service. Dec 13 14:06:14.916736 env[1556]: time="2024-12-13T14:06:14.881634180Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:06:14.916736 env[1556]: time="2024-12-13T14:06:14.881645020Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:06:14.916736 env[1556]: time="2024-12-13T14:06:14.881661180Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:06:14.916736 env[1556]: time="2024-12-13T14:06:14.881695180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:06:14.904778 systemd[1]: Started containerd.service. 
Dec 13 14:06:14.916892 env[1556]: time="2024-12-13T14:06:14.881890340Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:06:14.916892 env[1556]: time="2024-12-13T14:06:14.881949420Z" level=info msg="Connect containerd service"
Dec 13 14:06:14.916892 env[1556]: time="2024-12-13T14:06:14.881980260Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:06:14.916892 env[1556]: time="2024-12-13T14:06:14.882536700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:06:14.916892 env[1556]: time="2024-12-13T14:06:14.882774300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:06:14.916892 env[1556]: time="2024-12-13T14:06:14.882809820Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:06:14.916892 env[1556]: time="2024-12-13T14:06:14.882849780Z" level=info msg="containerd successfully booted in 0.101428s"
Dec 13 14:06:14.916892 env[1556]: time="2024-12-13T14:06:14.899794060Z" level=info msg="Start subscribing containerd event"
Dec 13 14:06:14.916892 env[1556]: time="2024-12-13T14:06:14.899861500Z" level=info msg="Start recovering state"
Dec 13 14:06:14.916892 env[1556]: time="2024-12-13T14:06:14.899935220Z" level=info msg="Start event monitor"
Dec 13 14:06:14.916892 env[1556]: time="2024-12-13T14:06:14.899957300Z" level=info msg="Start snapshots syncer"
Dec 13 14:06:14.916892 env[1556]: time="2024-12-13T14:06:14.899967780Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:06:14.916892 env[1556]: time="2024-12-13T14:06:14.899976660Z" level=info msg="Start streaming server"
Dec 13 14:06:14.979799 bash[1599]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:06:14.986043 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:06:15.004007 dbus-daemon[1525]: [system] SELinux support is enabled
Dec 13 14:06:15.004237 systemd[1]: Started dbus.service.
Dec 13 14:06:15.009917 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:06:15.009948 systemd[1]: Reached target system-config.target.
Dec 13 14:06:15.018375 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:06:15.018406 systemd[1]: Reached target user-config.target.
Dec 13 14:06:15.028147 systemd[1]: Started systemd-logind.service.
Dec 13 14:06:15.032703 dbus-daemon[1525]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 14:06:15.154183 systemd[1]: nvidia.service: Deactivated successfully.
Dec 13 14:06:15.448656 tar[1550]: linux-arm64/LICENSE
Dec 13 14:06:15.448875 tar[1550]: linux-arm64/README.md
Dec 13 14:06:15.456565 systemd[1]: Finished prepare-helm.service.
Dec 13 14:06:15.555463 systemd[1]: Started kubelet.service.
Dec 13 14:06:15.898653 update_engine[1544]: I1213 14:06:15.895776 1544 main.cc:92] Flatcar Update Engine starting
Dec 13 14:06:15.956179 systemd[1]: Started update-engine.service.
Dec 13 14:06:15.956463 update_engine[1544]: I1213 14:06:15.956202 1544 update_check_scheduler.cc:74] Next update check in 6m6s
Dec 13 14:06:15.962478 systemd[1]: Started locksmithd.service.
Dec 13 14:06:16.047413 kubelet[1645]: E1213 14:06:16.047327 1645 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:06:16.049378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:06:16.049523 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:06:16.521930 systemd-timesyncd[1469]: Timed out waiting for reply from 23.186.168.2:123 (0.flatcar.pool.ntp.org).
Dec 13 14:06:16.571896 systemd-timesyncd[1469]: Contacted time server 142.171.161.79:123 (0.flatcar.pool.ntp.org).
Dec 13 14:06:16.572356 systemd-timesyncd[1469]: Initial clock synchronization to Fri 2024-12-13 14:06:16.580716 UTC.
Dec 13 14:06:16.738971 sshd_keygen[1543]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:06:16.757300 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:06:16.763438 systemd[1]: Starting issuegen.service...
Dec 13 14:06:16.768336 systemd[1]: Started waagent.service.
Dec 13 14:06:16.772983 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:06:16.773279 systemd[1]: Finished issuegen.service.
Dec 13 14:06:16.778999 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:06:16.904348 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:06:16.911282 systemd[1]: Started getty@tty1.service.
Dec 13 14:06:16.917276 systemd[1]: Started serial-getty@ttyAMA0.service.
Dec 13 14:06:16.922615 systemd[1]: Reached target getty.target.
Dec 13 14:06:16.927398 systemd[1]: Reached target multi-user.target.
Dec 13 14:06:16.933489 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:06:16.941627 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:06:16.941868 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:06:16.947933 systemd[1]: Startup finished in 19.021s (kernel) + 36.565s (userspace) = 55.586s.
Dec 13 14:06:18.355715 login[1676]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Dec 13 14:06:18.399360 login[1675]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 14:06:18.418866 systemd[1]: Created slice user-500.slice.
Dec 13 14:06:18.420004 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:06:18.423866 systemd-logind[1542]: New session 1 of user core.
Dec 13 14:06:18.462317 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:06:18.463694 systemd[1]: Starting user@500.service...
Dec 13 14:06:18.468221 locksmithd[1653]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:06:18.476435 (systemd)[1682]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:06:18.590250 systemd[1682]: Queued start job for default target default.target.
Dec 13 14:06:18.590490 systemd[1682]: Reached target paths.target.
Dec 13 14:06:18.590505 systemd[1682]: Reached target sockets.target.
Dec 13 14:06:18.590516 systemd[1682]: Reached target timers.target.
Dec 13 14:06:18.590526 systemd[1682]: Reached target basic.target.
Dec 13 14:06:18.590642 systemd[1]: Started user@500.service.
Dec 13 14:06:18.591486 systemd[1]: Started session-1.scope.
Dec 13 14:06:18.591729 systemd[1682]: Reached target default.target.
Dec 13 14:06:18.591898 systemd[1682]: Startup finished in 109ms.
Dec 13 14:06:19.357236 login[1676]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 14:06:19.361222 systemd-logind[1542]: New session 2 of user core.
Dec 13 14:06:19.361608 systemd[1]: Started session-2.scope.
Dec 13 14:06:23.816161 waagent[1670]: 2024-12-13T14:06:23.816029Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Dec 13 14:06:23.822853 waagent[1670]: 2024-12-13T14:06:23.822760Z INFO Daemon Daemon OS: flatcar 3510.3.6
Dec 13 14:06:23.827485 waagent[1670]: 2024-12-13T14:06:23.827417Z INFO Daemon Daemon Python: 3.9.16
Dec 13 14:06:23.832436 waagent[1670]: 2024-12-13T14:06:23.832361Z INFO Daemon Daemon Run daemon
Dec 13 14:06:23.836930 waagent[1670]: 2024-12-13T14:06:23.836866Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6'
Dec 13 14:06:23.854063 waagent[1670]: 2024-12-13T14:06:23.853918Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Dec 13 14:06:23.869703 waagent[1670]: 2024-12-13T14:06:23.869561Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 13 14:06:23.879414 waagent[1670]: 2024-12-13T14:06:23.879342Z INFO Daemon Daemon cloud-init is enabled: False
Dec 13 14:06:23.884622 waagent[1670]: 2024-12-13T14:06:23.884558Z INFO Daemon Daemon Using waagent for provisioning
Dec 13 14:06:23.890377 waagent[1670]: 2024-12-13T14:06:23.890311Z INFO Daemon Daemon Activate resource disk
Dec 13 14:06:23.895099 waagent[1670]: 2024-12-13T14:06:23.895028Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Dec 13 14:06:23.909745 waagent[1670]: 2024-12-13T14:06:23.909664Z INFO Daemon Daemon Found device: None
Dec 13 14:06:23.914823 waagent[1670]: 2024-12-13T14:06:23.914750Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Dec 13 14:06:23.923675 waagent[1670]: 2024-12-13T14:06:23.923604Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Dec 13 14:06:23.936396 waagent[1670]: 2024-12-13T14:06:23.936330Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 14:06:23.942220 waagent[1670]: 2024-12-13T14:06:23.942147Z INFO Daemon Daemon Running default provisioning handler
Dec 13 14:06:23.955281 waagent[1670]: 2024-12-13T14:06:23.955162Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Dec 13 14:06:23.970478 waagent[1670]: 2024-12-13T14:06:23.970349Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 13 14:06:23.980794 waagent[1670]: 2024-12-13T14:06:23.980708Z INFO Daemon Daemon cloud-init is enabled: False
Dec 13 14:06:23.986099 waagent[1670]: 2024-12-13T14:06:23.985998Z INFO Daemon Daemon Copying ovf-env.xml
Dec 13 14:06:24.097846 waagent[1670]: 2024-12-13T14:06:24.097639Z INFO Daemon Daemon Successfully mounted dvd
Dec 13 14:06:24.148571 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Dec 13 14:06:24.218946 waagent[1670]: 2024-12-13T14:06:24.218803Z INFO Daemon Daemon Detect protocol endpoint
Dec 13 14:06:24.224091 waagent[1670]: 2024-12-13T14:06:24.223993Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 14:06:24.231241 waagent[1670]: 2024-12-13T14:06:24.231160Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Dec 13 14:06:24.238160 waagent[1670]: 2024-12-13T14:06:24.238085Z INFO Daemon Daemon Test for route to 168.63.129.16
Dec 13 14:06:24.243739 waagent[1670]: 2024-12-13T14:06:24.243670Z INFO Daemon Daemon Route to 168.63.129.16 exists
Dec 13 14:06:24.249557 waagent[1670]: 2024-12-13T14:06:24.249493Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Dec 13 14:06:24.483482 waagent[1670]: 2024-12-13T14:06:24.483355Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Dec 13 14:06:24.490936 waagent[1670]: 2024-12-13T14:06:24.490888Z INFO Daemon Daemon Wire protocol version:2012-11-30
Dec 13 14:06:24.496874 waagent[1670]: 2024-12-13T14:06:24.496793Z INFO Daemon Daemon Server preferred version:2015-04-05
Dec 13 14:06:26.272344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:06:26.272504 systemd[1]: Stopped kubelet.service.
Dec 13 14:06:26.273880 systemd[1]: Starting kubelet.service...
Dec 13 14:06:26.579899 waagent[1670]: 2024-12-13T14:06:26.579698Z INFO Daemon Daemon Initializing goal state during protocol detection
Dec 13 14:06:26.599972 waagent[1670]: 2024-12-13T14:06:26.599874Z INFO Daemon Daemon Forcing an update of the goal state..
Dec 13 14:06:26.605940 waagent[1670]: 2024-12-13T14:06:26.605844Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Dec 13 14:06:26.626088 systemd[1]: Started kubelet.service.
Dec 13 14:06:26.685967 kubelet[1724]: E1213 14:06:26.685915 1724 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:06:26.688519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:06:26.688661 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:06:29.021493 waagent[1670]: 2024-12-13T14:06:29.021350Z INFO Daemon Daemon Found private key matching thumbprint D50069850D28EB9CCA7E4D4C35A58ACE7923BE8E
Dec 13 14:06:29.029889 waagent[1670]: 2024-12-13T14:06:29.029815Z INFO Daemon Daemon Certificate with thumbprint 4119D68762CF6E0262E4728F9ADE3245185ACB98 has no matching private key.
Dec 13 14:06:29.039534 waagent[1670]: 2024-12-13T14:06:29.039463Z INFO Daemon Daemon Fetch goal state completed
Dec 13 14:06:29.099213 waagent[1670]: 2024-12-13T14:06:29.099155Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 8e78ea80-7777-4b5d-8a66-1476d34e0cc9 New eTag: 7171682191147587749]
Dec 13 14:06:29.110762 waagent[1670]: 2024-12-13T14:06:29.110681Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Dec 13 14:06:29.127130 waagent[1670]: 2024-12-13T14:06:29.127054Z INFO Daemon Daemon Starting provisioning
Dec 13 14:06:29.132533 waagent[1670]: 2024-12-13T14:06:29.132463Z INFO Daemon Daemon Handle ovf-env.xml.
Dec 13 14:06:29.137882 waagent[1670]: 2024-12-13T14:06:29.137819Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-18113e8891]
Dec 13 14:06:29.155351 waagent[1670]: 2024-12-13T14:06:29.155224Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-18113e8891]
Dec 13 14:06:29.162280 waagent[1670]: 2024-12-13T14:06:29.162197Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Dec 13 14:06:29.168978 waagent[1670]: 2024-12-13T14:06:29.168910Z INFO Daemon Daemon Primary interface is [eth0]
Dec 13 14:06:29.185583 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Dec 13 14:06:29.185807 systemd[1]: Stopped systemd-networkd-wait-online.service.
Dec 13 14:06:29.185859 systemd[1]: Stopping systemd-networkd-wait-online.service...
Dec 13 14:06:29.186033 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:06:29.191143 systemd-networkd[1274]: eth0: DHCPv6 lease lost
Dec 13 14:06:29.192516 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:06:29.192759 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:06:29.194631 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:06:29.227986 systemd-networkd[1743]: enP33971s1: Link UP
Dec 13 14:06:29.228310 systemd-networkd[1743]: enP33971s1: Gained carrier
Dec 13 14:06:29.229255 systemd-networkd[1743]: eth0: Link UP
Dec 13 14:06:29.229323 systemd-networkd[1743]: eth0: Gained carrier
Dec 13 14:06:29.229698 systemd-networkd[1743]: lo: Link UP
Dec 13 14:06:29.229762 systemd-networkd[1743]: lo: Gained carrier
Dec 13 14:06:29.230039 systemd-networkd[1743]: eth0: Gained IPv6LL
Dec 13 14:06:29.230763 systemd-networkd[1743]: Enumeration completed
Dec 13 14:06:29.231027 systemd[1]: Started systemd-networkd.service.
Dec 13 14:06:29.232855 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:06:29.238655 waagent[1670]: 2024-12-13T14:06:29.235590Z INFO Daemon Daemon Create user account if not exists
Dec 13 14:06:29.241882 waagent[1670]: 2024-12-13T14:06:29.241795Z INFO Daemon Daemon User core already exists, skip useradd
Dec 13 14:06:29.242718 systemd-networkd[1743]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:06:29.248349 waagent[1670]: 2024-12-13T14:06:29.248237Z INFO Daemon Daemon Configure sudoer
Dec 13 14:06:29.253941 waagent[1670]: 2024-12-13T14:06:29.253630Z INFO Daemon Daemon Configure sshd
Dec 13 14:06:29.258614 waagent[1670]: 2024-12-13T14:06:29.258540Z INFO Daemon Daemon Deploy ssh public key.
Dec 13 14:06:29.269191 systemd-networkd[1743]: eth0: DHCPv4 address 10.200.20.41/24, gateway 10.200.20.1 acquired from 168.63.129.16
Dec 13 14:06:29.280199 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:06:30.366782 waagent[1670]: 2024-12-13T14:06:30.366701Z INFO Daemon Daemon Provisioning complete
Dec 13 14:06:30.388526 waagent[1670]: 2024-12-13T14:06:30.388447Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Dec 13 14:06:30.395203 waagent[1670]: 2024-12-13T14:06:30.395124Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Dec 13 14:06:30.406385 waagent[1670]: 2024-12-13T14:06:30.406306Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Dec 13 14:06:30.712370 waagent[1753]: 2024-12-13T14:06:30.712208Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Dec 13 14:06:30.713608 waagent[1753]: 2024-12-13T14:06:30.713534Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 14:06:30.713893 waagent[1753]: 2024-12-13T14:06:30.713841Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 14:06:30.727057 waagent[1753]: 2024-12-13T14:06:30.726962Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Dec 13 14:06:30.727438 waagent[1753]: 2024-12-13T14:06:30.727383Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Dec 13 14:06:30.798797 waagent[1753]: 2024-12-13T14:06:30.798656Z INFO ExtHandler ExtHandler Found private key matching thumbprint D50069850D28EB9CCA7E4D4C35A58ACE7923BE8E
Dec 13 14:06:30.799256 waagent[1753]: 2024-12-13T14:06:30.799195Z INFO ExtHandler ExtHandler Certificate with thumbprint 4119D68762CF6E0262E4728F9ADE3245185ACB98 has no matching private key.
Dec 13 14:06:30.799642 waagent[1753]: 2024-12-13T14:06:30.799545Z INFO ExtHandler ExtHandler Fetch goal state completed
Dec 13 14:06:30.813807 waagent[1753]: 2024-12-13T14:06:30.813748Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: bdf5f84a-86a0-44b8-bda8-89b40656d015 New eTag: 7171682191147587749]
Dec 13 14:06:30.814559 waagent[1753]: 2024-12-13T14:06:30.814499Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Dec 13 14:06:30.857987 waagent[1753]: 2024-12-13T14:06:30.857845Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Dec 13 14:06:30.880666 waagent[1753]: 2024-12-13T14:06:30.873466Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1753
Dec 13 14:06:30.880666 waagent[1753]: 2024-12-13T14:06:30.877612Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk']
Dec 13 14:06:30.880666 waagent[1753]: 2024-12-13T14:06:30.878996Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 13 14:06:30.912714 waagent[1753]: 2024-12-13T14:06:30.912655Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 13 14:06:30.913337 waagent[1753]: 2024-12-13T14:06:30.913279Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 13 14:06:30.922376 waagent[1753]: 2024-12-13T14:06:30.922321Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 13 14:06:30.923091 waagent[1753]: 2024-12-13T14:06:30.923014Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Dec 13 14:06:30.924392 waagent[1753]: 2024-12-13T14:06:30.924327Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Dec 13 14:06:30.925942 waagent[1753]: 2024-12-13T14:06:30.925872Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 13 14:06:30.926274 waagent[1753]: 2024-12-13T14:06:30.926195Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 14:06:30.926468 waagent[1753]: 2024-12-13T14:06:30.926414Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 14:06:30.927465 waagent[1753]: 2024-12-13T14:06:30.927397Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 13 14:06:30.927792 waagent[1753]: 2024-12-13T14:06:30.927727Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Dec 13 14:06:30.927792 waagent[1753]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Dec 13 14:06:30.927792 waagent[1753]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Dec 13 14:06:30.927792 waagent[1753]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Dec 13 14:06:30.927792 waagent[1753]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Dec 13 14:06:30.927792 waagent[1753]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 14:06:30.927792 waagent[1753]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 14:06:30.930874 waagent[1753]: 2024-12-13T14:06:30.930704Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Dec 13 14:06:30.931741 waagent[1753]: 2024-12-13T14:06:30.931663Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 14:06:30.931919 waagent[1753]: 2024-12-13T14:06:30.931865Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 14:06:30.932530 waagent[1753]: 2024-12-13T14:06:30.932459Z INFO EnvHandler ExtHandler Configure routes
Dec 13 14:06:30.932700 waagent[1753]: 2024-12-13T14:06:30.932647Z INFO EnvHandler ExtHandler Gateway:None
Dec 13 14:06:30.932822 waagent[1753]: 2024-12-13T14:06:30.932777Z INFO EnvHandler ExtHandler Routes:None
Dec 13 14:06:30.933845 waagent[1753]: 2024-12-13T14:06:30.933775Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 13 14:06:30.934036 waagent[1753]: 2024-12-13T14:06:30.933956Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 13 14:06:30.934798 waagent[1753]: 2024-12-13T14:06:30.934724Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Dec 13 14:06:30.934975 waagent[1753]: 2024-12-13T14:06:30.934904Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Dec 13 14:06:30.935241 waagent[1753]: 2024-12-13T14:06:30.935182Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Dec 13 14:06:30.948671 waagent[1753]: 2024-12-13T14:06:30.948593Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Dec 13 14:06:30.950826 waagent[1753]: 2024-12-13T14:06:30.950777Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Dec 13 14:06:30.951917 waagent[1753]: 2024-12-13T14:06:30.951859Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Dec 13 14:06:30.962045 waagent[1753]: 2024-12-13T14:06:30.961955Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1743'
Dec 13 14:06:30.974924 waagent[1753]: 2024-12-13T14:06:30.974729Z INFO MonitorHandler ExtHandler Network interfaces:
Dec 13 14:06:30.974924 waagent[1753]: Executing ['ip', '-a', '-o', 'link']:
Dec 13 14:06:30.974924 waagent[1753]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Dec 13 14:06:30.974924 waagent[1753]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:59:42 brd ff:ff:ff:ff:ff:ff
Dec 13 14:06:30.974924 waagent[1753]: 3: enP33971s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:59:42 brd ff:ff:ff:ff:ff:ff\ altname enP33971p0s2
Dec 13 14:06:30.974924 waagent[1753]: Executing ['ip', '-4', '-a', '-o', 'address']:
Dec 13 14:06:30.974924 waagent[1753]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Dec 13 14:06:30.974924 waagent[1753]: 2: eth0 inet 10.200.20.41/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Dec 13 14:06:30.974924 waagent[1753]: Executing ['ip', '-6', '-a', '-o', 'address']:
Dec 13 14:06:30.974924 waagent[1753]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Dec 13 14:06:30.974924 waagent[1753]: 2: eth0 inet6 fe80::222:48ff:feb8:5942/64 scope link \ valid_lft forever preferred_lft forever
Dec 13 14:06:31.003395 waagent[1753]: 2024-12-13T14:06:31.003327Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Dec 13 14:06:31.111425 waagent[1753]: 2024-12-13T14:06:31.111283Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules
Dec 13 14:06:31.116174 waagent[1753]: 2024-12-13T14:06:31.116019Z INFO EnvHandler ExtHandler Firewall rules:
Dec 13 14:06:31.116174 waagent[1753]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 14:06:31.116174 waagent[1753]: pkts bytes target prot opt in out source destination
Dec 13 14:06:31.116174 waagent[1753]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 13 14:06:31.116174 waagent[1753]: pkts bytes target prot opt in out source destination
Dec 13 14:06:31.116174 waagent[1753]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 14:06:31.116174 waagent[1753]: pkts bytes target prot opt in out source destination
Dec 13 14:06:31.116174 waagent[1753]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Dec 13 14:06:31.116174 waagent[1753]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Dec 13 14:06:31.118277 waagent[1753]: 2024-12-13T14:06:31.118211Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Dec 13 14:06:31.332645 waagent[1753]: 2024-12-13T14:06:31.332579Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting
Dec 13 14:06:31.410009 waagent[1670]: 2024-12-13T14:06:31.409865Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Dec 13 14:06:31.416465 waagent[1670]: 2024-12-13T14:06:31.416407Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent
Dec 13 14:06:32.637985 waagent[1792]: 2024-12-13T14:06:32.637879Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2)
Dec 13 14:06:32.641398 waagent[1792]: 2024-12-13T14:06:32.641319Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6
Dec 13 14:06:32.641550 waagent[1792]: 2024-12-13T14:06:32.641502Z INFO ExtHandler ExtHandler Python: 3.9.16
Dec 13 14:06:32.641684 waagent[1792]: 2024-12-13T14:06:32.641639Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Dec 13 14:06:32.650562 waagent[1792]: 2024-12-13T14:06:32.650422Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Dec 13 14:06:32.651004 waagent[1792]: 2024-12-13T14:06:32.650945Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 14:06:32.651190 waagent[1792]: 2024-12-13T14:06:32.651140Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 14:06:32.665344 waagent[1792]: 2024-12-13T14:06:32.665259Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Dec 13 14:06:32.674777 waagent[1792]: 2024-12-13T14:06:32.674712Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Dec 13 14:06:32.675876 waagent[1792]: 2024-12-13T14:06:32.675814Z INFO ExtHandler
Dec 13 14:06:32.676028 waagent[1792]: 2024-12-13T14:06:32.675979Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: cd2f0c25-1b16-4f42-bdd3-289413f8078f eTag: 7171682191147587749 source: Fabric]
Dec 13 14:06:32.676794 waagent[1792]: 2024-12-13T14:06:32.676733Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Dec 13 14:06:32.678048 waagent[1792]: 2024-12-13T14:06:32.677981Z INFO ExtHandler
Dec 13 14:06:32.678210 waagent[1792]: 2024-12-13T14:06:32.678160Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Dec 13 14:06:32.690631 waagent[1792]: 2024-12-13T14:06:32.690574Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Dec 13 14:06:32.691211 waagent[1792]: 2024-12-13T14:06:32.691157Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Dec 13 14:06:32.711068 waagent[1792]: 2024-12-13T14:06:32.711001Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Dec 13 14:06:32.783770 waagent[1792]: 2024-12-13T14:06:32.783616Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D50069850D28EB9CCA7E4D4C35A58ACE7923BE8E', 'hasPrivateKey': True}
Dec 13 14:06:32.788131 waagent[1792]: 2024-12-13T14:06:32.788016Z INFO ExtHandler Downloaded certificate {'thumbprint': '4119D68762CF6E0262E4728F9ADE3245185ACB98', 'hasPrivateKey': False}
Dec 13 14:06:32.789231 waagent[1792]: 2024-12-13T14:06:32.789166Z INFO ExtHandler Fetch goal state completed
Dec 13 14:06:32.812451 waagent[1792]: 2024-12-13T14:06:32.812320Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
Dec 13 14:06:32.825715 waagent[1792]: 2024-12-13T14:06:32.825601Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1792
Dec 13 14:06:32.829166 waagent[1792]: 2024-12-13T14:06:32.829087Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk']
Dec 13 14:06:32.830320 waagent[1792]: 2024-12-13T14:06:32.830256Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Dec 13 14:06:32.830653 waagent[1792]: 2024-12-13T14:06:32.830597Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Dec 13 14:06:32.832914 waagent[1792]: 2024-12-13T14:06:32.832849Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 13 14:06:32.838847 waagent[1792]: 2024-12-13T14:06:32.838778Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 13 14:06:32.839307 waagent[1792]: 2024-12-13T14:06:32.839249Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 13 14:06:32.847473 waagent[1792]: 2024-12-13T14:06:32.847418Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 13 14:06:32.848189 waagent[1792]: 2024-12-13T14:06:32.848130Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Dec 13 14:06:32.865922 waagent[1792]: 2024-12-13T14:06:32.865802Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now.
Dec 13 14:06:32.869186 waagent[1792]: 2024-12-13T14:06:32.869063Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver
Dec 13 14:06:32.870485 waagent[1792]: 2024-12-13T14:06:32.870421Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Dec 13 14:06:32.872240 waagent[1792]: 2024-12-13T14:06:32.872171Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 13 14:06:32.872556 waagent[1792]: 2024-12-13T14:06:32.872478Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:06:32.873154 waagent[1792]: 2024-12-13T14:06:32.873043Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:06:32.873771 waagent[1792]: 2024-12-13T14:06:32.873707Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 14:06:32.874080 waagent[1792]: 2024-12-13T14:06:32.874019Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:06:32.874080 waagent[1792]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:06:32.874080 waagent[1792]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:06:32.874080 waagent[1792]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:06:32.874080 waagent[1792]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:32.874080 waagent[1792]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:32.874080 waagent[1792]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:06:32.876613 waagent[1792]: 2024-12-13T14:06:32.876508Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 14:06:32.876872 waagent[1792]: 2024-12-13T14:06:32.876797Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:06:32.877550 waagent[1792]: 2024-12-13T14:06:32.877483Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:06:32.878203 waagent[1792]: 2024-12-13T14:06:32.878002Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:06:32.878418 waagent[1792]: 2024-12-13T14:06:32.878346Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:06:32.878581 waagent[1792]: 2024-12-13T14:06:32.878522Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
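The routing table that MonitorHandler dumps above comes from `/proc/net/route`, which encodes addresses as little-endian hexadecimal. As an illustrative sketch (not part of the agent's code), each field can be decoded like this:

```python
import socket
import struct

def decode_route_addr(hex_field: str) -> str:
    """Decode a little-endian hex address from /proc/net/route
    (e.g. '0114C80A') into dotted-quad notation."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_field, 16)))

# Fields taken from the routing table logged above.
print(decode_route_addr("0114C80A"))  # gateway of the 00000000 default route -> 10.200.20.1
print(decode_route_addr("0014C80A"))  # on-link subnet route                 -> 10.200.20.0
print(decode_route_addr("10813FA8"))  # Azure wireserver host route          -> 168.63.129.16
print(decode_route_addr("FEA9FEA9"))  # link-local metadata host route       -> 169.254.169.254
```

Note how the decoded `10813FA8` entry matches the `Wire server endpoint:168.63.129.16` lines elsewhere in the log.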
Dec 13 14:06:32.881156 waagent[1792]: 2024-12-13T14:06:32.880990Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:06:32.882063 waagent[1792]: 2024-12-13T14:06:32.881995Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:06:32.883399 waagent[1792]: 2024-12-13T14:06:32.883324Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:06:32.885663 waagent[1792]: 2024-12-13T14:06:32.885531Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:06:32.885928 waagent[1792]: 2024-12-13T14:06:32.885845Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 13 14:06:32.900282 waagent[1792]: 2024-12-13T14:06:32.900114Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:06:32.900282 waagent[1792]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:06:32.900282 waagent[1792]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:06:32.900282 waagent[1792]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:59:42 brd ff:ff:ff:ff:ff:ff Dec 13 14:06:32.900282 waagent[1792]: 3: enP33971s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:59:42 brd ff:ff:ff:ff:ff:ff\ altname enP33971p0s2 Dec 13 14:06:32.900282 waagent[1792]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:06:32.900282 waagent[1792]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:06:32.900282 waagent[1792]: 2: eth0 inet 10.200.20.41/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:06:32.900282 waagent[1792]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:06:32.900282 waagent[1792]: 1: lo inet6 ::1/128 scope host \ valid_lft forever 
preferred_lft forever Dec 13 14:06:32.900282 waagent[1792]: 2: eth0 inet6 fe80::222:48ff:feb8:5942/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:06:32.912840 waagent[1792]: 2024-12-13T14:06:32.912747Z INFO ExtHandler ExtHandler Downloading agent manifest Dec 13 14:06:32.961151 waagent[1792]: 2024-12-13T14:06:32.961044Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 14:06:32.961151 waagent[1792]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:32.961151 waagent[1792]: pkts bytes target prot opt in out source destination Dec 13 14:06:32.961151 waagent[1792]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:32.961151 waagent[1792]: pkts bytes target prot opt in out source destination Dec 13 14:06:32.961151 waagent[1792]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:06:32.961151 waagent[1792]: pkts bytes target prot opt in out source destination Dec 13 14:06:32.961151 waagent[1792]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 14:06:32.961151 waagent[1792]: 183 19519 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:06:32.961151 waagent[1792]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:06:32.973205 waagent[1792]: 2024-12-13T14:06:32.973117Z INFO ExtHandler ExtHandler Dec 13 14:06:32.973349 waagent[1792]: 2024-12-13T14:06:32.973296Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3e217c89-a3b4-429b-ae32-9678e0d403c8 correlation 4a68041b-2400-4c67-9248-564545522f12 created: 2024-12-13T14:04:09.774719Z] Dec 13 14:06:32.974676 waagent[1792]: 2024-12-13T14:06:32.974601Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
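The "Current Firewall rules" dump above shows the three OUTPUT-chain rules waagent maintains for wireserver traffic, matched in order: allow DNS over TCP (dpt:53), allow the root user (owner UID match 0), and drop any other new or invalid connection. As a hedged model of that rule order only (this is not how iptables is implemented, and `wireserver_verdict` is a name invented here for illustration):

```python
def wireserver_verdict(dst: str, dport: int, uid: int, new_conn: bool) -> str:
    """Evaluate, in order, the three waagent OUTPUT-chain rules
    shown in the log for traffic to 168.63.129.16."""
    if dst != "168.63.129.16":
        return "ACCEPT"   # chain policy ACCEPT for other destinations
    if dport == 53:
        return "ACCEPT"   # rule 1: DNS TCP allowed for any user
    if uid == 0:
        return "ACCEPT"   # rule 2: root traffic allowed (owner UID match 0)
    if new_conn:
        return "DROP"     # rule 3: new/invalid connections from non-root dropped
    return "ACCEPT"       # established traffic falls through to the policy

print(wireserver_verdict("168.63.129.16", 80, 1000, True))   # non-root HTTP -> DROP
print(wireserver_verdict("168.63.129.16", 53, 1000, True))   # non-root DNS -> ACCEPT
```

This matches the "Firewall rule to allow DNS TCP request to wireserver for a non root user" message logged just before the dump.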
Dec 13 14:06:32.978193 waagent[1792]: 2024-12-13T14:06:32.978112Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 5 ms] Dec 13 14:06:32.997711 waagent[1792]: 2024-12-13T14:06:32.997636Z INFO ExtHandler ExtHandler Looking for existing remote access users. Dec 13 14:06:33.017899 waagent[1792]: 2024-12-13T14:06:33.017808Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 799FF135-48B4-4299-9D8F-18AB15090068;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Dec 13 14:06:36.772424 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:06:36.772597 systemd[1]: Stopped kubelet.service. Dec 13 14:06:36.774044 systemd[1]: Starting kubelet.service... Dec 13 14:06:37.011727 systemd[1]: Started kubelet.service. Dec 13 14:06:37.055668 kubelet[1842]: E1213 14:06:37.055566 1842 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:06:37.057683 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:06:37.057829 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:06:47.272517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:06:47.272680 systemd[1]: Stopped kubelet.service. Dec 13 14:06:47.274098 systemd[1]: Starting kubelet.service... Dec 13 14:06:47.474388 systemd[1]: Started kubelet.service. 
Dec 13 14:06:47.514946 kubelet[1857]: E1213 14:06:47.514897 1857 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:06:47.517097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:06:47.517236 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:06:52.839817 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 13 14:06:57.522448 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 14:06:57.522610 systemd[1]: Stopped kubelet.service. Dec 13 14:06:57.523992 systemd[1]: Starting kubelet.service... Dec 13 14:06:57.637036 systemd[1]: Started kubelet.service. Dec 13 14:06:57.683330 kubelet[1872]: E1213 14:06:57.683287 1872 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:06:57.685330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:06:57.685464 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:01.376154 update_engine[1544]: I1213 14:07:01.376116 1544 update_attempter.cc:509] Updating boot flags... Dec 13 14:07:07.772403 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 14:07:07.772579 systemd[1]: Stopped kubelet.service. Dec 13 14:07:07.774049 systemd[1]: Starting kubelet.service... Dec 13 14:07:08.008538 systemd[1]: Started kubelet.service. 
Dec 13 14:07:08.052226 kubelet[1930]: E1213 14:07:08.052124 1930 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:08.054093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:08.054234 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:17.416809 systemd[1]: Created slice system-sshd.slice. Dec 13 14:07:17.418038 systemd[1]: Started sshd@0-10.200.20.41:22-10.200.16.10:47044.service. Dec 13 14:07:17.888160 sshd[1937]: Accepted publickey for core from 10.200.16.10 port 47044 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:17.893338 sshd[1937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:17.897657 systemd[1]: Started session-3.scope. Dec 13 14:07:17.898656 systemd-logind[1542]: New session 3 of user core. Dec 13 14:07:18.272992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 14:07:18.273116 systemd[1]: Stopped kubelet.service. Dec 13 14:07:18.274535 systemd[1]: Starting kubelet.service... Dec 13 14:07:18.275645 systemd[1]: Started sshd@1-10.200.20.41:22-10.200.16.10:47056.service. Dec 13 14:07:18.509010 systemd[1]: Started kubelet.service. 
Dec 13 14:07:18.556674 kubelet[1951]: E1213 14:07:18.555630 1951 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:18.558642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:18.558784 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:18.714965 sshd[1943]: Accepted publickey for core from 10.200.16.10 port 47056 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:18.716513 sshd[1943]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:18.720455 systemd-logind[1542]: New session 4 of user core. Dec 13 14:07:18.720804 systemd[1]: Started session-4.scope. Dec 13 14:07:19.067527 sshd[1943]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:19.070547 systemd[1]: sshd@1-10.200.20.41:22-10.200.16.10:47056.service: Deactivated successfully. Dec 13 14:07:19.071281 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:07:19.072179 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:07:19.072883 systemd-logind[1542]: Removed session 4. Dec 13 14:07:19.136403 systemd[1]: Started sshd@2-10.200.20.41:22-10.200.16.10:44728.service. Dec 13 14:07:19.560814 sshd[1964]: Accepted publickey for core from 10.200.16.10 port 44728 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:19.562436 sshd[1964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:19.566587 systemd[1]: Started session-5.scope. Dec 13 14:07:19.566891 systemd-logind[1542]: New session 5 of user core. 
Dec 13 14:07:19.879108 sshd[1964]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:19.882008 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:07:19.882698 systemd[1]: sshd@2-10.200.20.41:22-10.200.16.10:44728.service: Deactivated successfully. Dec 13 14:07:19.883423 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:07:19.884021 systemd-logind[1542]: Removed session 5. Dec 13 14:07:19.948961 systemd[1]: Started sshd@3-10.200.20.41:22-10.200.16.10:44740.service. Dec 13 14:07:20.380249 sshd[1971]: Accepted publickey for core from 10.200.16.10 port 44740 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:20.381811 sshd[1971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:20.385113 systemd-logind[1542]: New session 6 of user core. Dec 13 14:07:20.385824 systemd[1]: Started session-6.scope. Dec 13 14:07:20.729806 sshd[1971]: pam_unix(sshd:session): session closed for user core Dec 13 14:07:20.732585 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:07:20.733815 systemd[1]: sshd@3-10.200.20.41:22-10.200.16.10:44740.service: Deactivated successfully. Dec 13 14:07:20.734489 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:07:20.735849 systemd-logind[1542]: Removed session 6. Dec 13 14:07:20.799282 systemd[1]: Started sshd@4-10.200.20.41:22-10.200.16.10:44742.service. Dec 13 14:07:21.227116 sshd[1978]: Accepted publickey for core from 10.200.16.10 port 44742 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:07:21.228644 sshd[1978]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:07:21.232289 systemd-logind[1542]: New session 7 of user core. Dec 13 14:07:21.232670 systemd[1]: Started session-7.scope. 
Dec 13 14:07:21.538080 sudo[1982]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:07:21.538616 sudo[1982]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:07:21.558490 systemd[1]: Starting docker.service... Dec 13 14:07:21.590556 env[1992]: time="2024-12-13T14:07:21.590516067Z" level=info msg="Starting up" Dec 13 14:07:21.592024 env[1992]: time="2024-12-13T14:07:21.591995573Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:07:21.592024 env[1992]: time="2024-12-13T14:07:21.592018832Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:07:21.592171 env[1992]: time="2024-12-13T14:07:21.592037136Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:07:21.592171 env[1992]: time="2024-12-13T14:07:21.592047248Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:07:21.593842 env[1992]: time="2024-12-13T14:07:21.593820536Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:07:21.593929 env[1992]: time="2024-12-13T14:07:21.593916732Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:07:21.594002 env[1992]: time="2024-12-13T14:07:21.593986191Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:07:21.594056 env[1992]: time="2024-12-13T14:07:21.594044700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:07:21.600724 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4220164406-merged.mount: Deactivated successfully. 
Dec 13 14:07:21.689455 env[1992]: time="2024-12-13T14:07:21.689420396Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 14:07:21.689636 env[1992]: time="2024-12-13T14:07:21.689622499Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 14:07:21.690336 env[1992]: time="2024-12-13T14:07:21.689869083Z" level=info msg="Loading containers: start." Dec 13 14:07:21.791098 kernel: Initializing XFRM netlink socket Dec 13 14:07:21.817906 env[1992]: time="2024-12-13T14:07:21.817877302Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:07:21.878636 systemd-networkd[1743]: docker0: Link UP Dec 13 14:07:21.902074 env[1992]: time="2024-12-13T14:07:21.902024864Z" level=info msg="Loading containers: done." Dec 13 14:07:21.926006 env[1992]: time="2024-12-13T14:07:21.925963315Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:07:21.926188 env[1992]: time="2024-12-13T14:07:21.926162341Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:07:21.926278 env[1992]: time="2024-12-13T14:07:21.926256739Z" level=info msg="Daemon has completed initialization" Dec 13 14:07:21.957190 systemd[1]: Started docker.service. Dec 13 14:07:21.962842 env[1992]: time="2024-12-13T14:07:21.962792406Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:07:27.899303 env[1556]: time="2024-12-13T14:07:27.899256509Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:07:28.708513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 14:07:28.708622 systemd[1]: Stopped kubelet.service. Dec 13 14:07:28.709985 systemd[1]: Starting kubelet.service... 
Dec 13 14:07:28.734331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882566525.mount: Deactivated successfully. Dec 13 14:07:28.819500 systemd[1]: Started kubelet.service. Dec 13 14:07:28.868678 kubelet[2123]: E1213 14:07:28.868635 2123 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:28.870645 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:28.870778 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:30.901325 env[1556]: time="2024-12-13T14:07:30.901266856Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:30.906490 env[1556]: time="2024-12-13T14:07:30.906448763Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:30.910782 env[1556]: time="2024-12-13T14:07:30.910652738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:30.915730 env[1556]: time="2024-12-13T14:07:30.915700897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:30.916440 env[1556]: time="2024-12-13T14:07:30.916414930Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image 
reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 14:07:30.925943 env[1556]: time="2024-12-13T14:07:30.925911577Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:07:33.319720 env[1556]: time="2024-12-13T14:07:33.319676140Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:33.326634 env[1556]: time="2024-12-13T14:07:33.326599429Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:33.331485 env[1556]: time="2024-12-13T14:07:33.331453858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:33.340266 env[1556]: time="2024-12-13T14:07:33.340222746Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:33.340580 env[1556]: time="2024-12-13T14:07:33.340554258Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 14:07:33.350861 env[1556]: time="2024-12-13T14:07:33.350797820Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:07:35.313791 env[1556]: time="2024-12-13T14:07:35.313718623Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 
14:07:35.319463 env[1556]: time="2024-12-13T14:07:35.319424224Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:35.323534 env[1556]: time="2024-12-13T14:07:35.323501195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:35.327950 env[1556]: time="2024-12-13T14:07:35.327915765Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:35.328723 env[1556]: time="2024-12-13T14:07:35.328697499Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 14:07:35.337854 env[1556]: time="2024-12-13T14:07:35.337823462Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:07:36.398099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4140233785.mount: Deactivated successfully. 
Dec 13 14:07:36.957375 env[1556]: time="2024-12-13T14:07:36.957331301Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:36.962597 env[1556]: time="2024-12-13T14:07:36.962567823Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:36.966758 env[1556]: time="2024-12-13T14:07:36.966733406Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:36.969310 env[1556]: time="2024-12-13T14:07:36.969286405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:36.969652 env[1556]: time="2024-12-13T14:07:36.969620930Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 14:07:36.978796 env[1556]: time="2024-12-13T14:07:36.978758789Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:07:37.595584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1739862993.mount: Deactivated successfully. Dec 13 14:07:39.022445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 13 14:07:39.022608 systemd[1]: Stopped kubelet.service. Dec 13 14:07:39.024031 systemd[1]: Starting kubelet.service... Dec 13 14:07:39.863616 systemd[1]: Started kubelet.service. 
Dec 13 14:07:39.904482 kubelet[2158]: E1213 14:07:39.904438 2158 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:39.906327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:39.906461 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:07:40.563271 env[1556]: time="2024-12-13T14:07:40.563217170Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:40.570930 env[1556]: time="2024-12-13T14:07:40.570872449Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:40.575671 env[1556]: time="2024-12-13T14:07:40.575624526Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:40.581150 env[1556]: time="2024-12-13T14:07:40.581115497Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:40.581881 env[1556]: time="2024-12-13T14:07:40.581853951Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 14:07:40.590961 env[1556]: time="2024-12-13T14:07:40.590929968Z" level=info msg="PullImage 
\"registry.k8s.io/pause:3.9\"" Dec 13 14:07:41.220868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount672757945.mount: Deactivated successfully. Dec 13 14:07:41.244298 env[1556]: time="2024-12-13T14:07:41.244233592Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:41.251170 env[1556]: time="2024-12-13T14:07:41.251125762Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:41.255530 env[1556]: time="2024-12-13T14:07:41.255499774Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:41.260453 env[1556]: time="2024-12-13T14:07:41.260423746Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:41.261058 env[1556]: time="2024-12-13T14:07:41.261029718Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 14:07:41.270128 env[1556]: time="2024-12-13T14:07:41.270089384Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:07:41.906292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2618446231.mount: Deactivated successfully. 
Dec 13 14:07:49.985132 env[1556]: time="2024-12-13T14:07:49.985079730Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:50.022437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 13 14:07:50.022594 systemd[1]: Stopped kubelet.service. Dec 13 14:07:50.024006 systemd[1]: Starting kubelet.service... Dec 13 14:07:50.100186 systemd[1]: Started kubelet.service. Dec 13 14:07:50.143454 kubelet[2183]: E1213 14:07:50.143409 2183 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:07:50.145442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:07:50.145576 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
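The recurring `kubelet.service: Scheduled restart job` entries show systemd's `Restart=` logic cycling the unit because `/var/lib/kubelet/config.yaml` does not exist yet (it is typically written later by `kubeadm` or a provisioning step). The spacing of the restarts can be measured from the timestamps themselves; as a sketch using values copied from the log above (the underlying `RestartSec=` setting is an assumption, not shown in this log):

```python
from datetime import datetime

# "Scheduled restart job" timestamps for restart counters 2 through 5,
# copied from the log entries above.
restarts = [
    "14:06:36.772424",
    "14:06:47.272517",
    "14:06:57.522448",
    "14:07:07.772403",
]

times = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print([round(g, 1) for g in gaps])  # roughly 10 s between restart attempts
```

Each gap is a little over ten seconds, consistent with a `RestartSec=10` style backoff plus the start/stop time of each failed attempt.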
Dec 13 14:07:53.187026 env[1556]: time="2024-12-13T14:07:53.186219749Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:53.227626 env[1556]: time="2024-12-13T14:07:53.227544700Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:53.231334 env[1556]: time="2024-12-13T14:07:53.231278926Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:07:53.233221 env[1556]: time="2024-12-13T14:07:53.232297900Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 14:07:59.159447 systemd[1]: Stopped kubelet.service. Dec 13 14:07:59.161986 systemd[1]: Starting kubelet.service... Dec 13 14:07:59.193158 systemd[1]: Reloading. Dec 13 14:07:59.240680 /usr/lib/systemd/system-generators/torcx-generator[2276]: time="2024-12-13T14:07:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:07:59.240716 /usr/lib/systemd/system-generators/torcx-generator[2276]: time="2024-12-13T14:07:59Z" level=info msg="torcx already run" Dec 13 14:07:59.351888 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Dec 13 14:07:59.352098 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:07:59.369916 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:07:59.456506 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 14:07:59.456583 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 14:07:59.457136 systemd[1]: Stopped kubelet.service. Dec 13 14:07:59.459429 systemd[1]: Starting kubelet.service... Dec 13 14:07:59.620830 systemd[1]: Started kubelet.service. Dec 13 14:07:59.667830 kubelet[2353]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:07:59.668191 kubelet[2353]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:07:59.668246 kubelet[2353]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:07:59.668375 kubelet[2353]: I1213 14:07:59.668342 2353 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:08:00.487382 kubelet[2353]: I1213 14:08:00.487352 2353 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:08:00.487543 kubelet[2353]: I1213 14:08:00.487532 2353 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:08:00.487804 kubelet[2353]: I1213 14:08:00.487791 2353 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:08:00.503277 kubelet[2353]: E1213 14:08:00.503255 2353 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.41:6443: connect: connection refused Dec 13 14:08:00.503447 kubelet[2353]: I1213 14:08:00.503434 2353 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:08:00.512437 kubelet[2353]: I1213 14:08:00.512418 2353 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:08:00.513908 kubelet[2353]: I1213 14:08:00.513890 2353 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:08:00.514188 kubelet[2353]: I1213 14:08:00.514172 2353 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:08:00.514323 kubelet[2353]: I1213 14:08:00.514310 2353 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:08:00.514382 kubelet[2353]: I1213 14:08:00.514374 2353 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:08:00.514535 kubelet[2353]: 
I1213 14:08:00.514525 2353 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:08:00.516835 kubelet[2353]: I1213 14:08:00.516820 2353 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:08:00.516930 kubelet[2353]: I1213 14:08:00.516920 2353 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:08:00.517027 kubelet[2353]: I1213 14:08:00.516995 2353 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:08:00.517120 kubelet[2353]: I1213 14:08:00.517109 2353 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:08:00.518357 kubelet[2353]: I1213 14:08:00.518328 2353 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:08:00.518585 kubelet[2353]: I1213 14:08:00.518560 2353 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:08:00.519299 kubelet[2353]: W1213 14:08:00.519275 2353 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 14:08:00.519786 kubelet[2353]: I1213 14:08:00.519751 2353 server.go:1256] "Started kubelet" Dec 13 14:08:00.519893 kubelet[2353]: W1213 14:08:00.519854 2353 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-18113e8891&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Dec 13 14:08:00.519970 kubelet[2353]: E1213 14:08:00.519904 2353 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-18113e8891&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Dec 13 14:08:00.523018 kubelet[2353]: W1213 14:08:00.522986 2353 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Dec 13 14:08:00.523142 kubelet[2353]: E1213 14:08:00.523131 2353 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Dec 13 14:08:00.523310 kubelet[2353]: I1213 14:08:00.523298 2353 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:08:00.523583 kubelet[2353]: I1213 14:08:00.523569 2353 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:08:00.523707 kubelet[2353]: I1213 14:08:00.523698 2353 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:08:00.524421 kubelet[2353]: I1213 14:08:00.524406 2353 server.go:461] "Adding debug 
handlers to kubelet server" Dec 13 14:08:00.527791 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:08:00.532714 kubelet[2353]: I1213 14:08:00.532683 2353 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:08:00.534521 kubelet[2353]: E1213 14:08:00.534499 2353 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-18113e8891.1810c1be82c8082b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-18113e8891,UID:ci-3510.3.6-a-18113e8891,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-18113e8891,},FirstTimestamp:2024-12-13 14:08:00.519735339 +0000 UTC m=+0.889071204,LastTimestamp:2024-12-13 14:08:00.519735339 +0000 UTC m=+0.889071204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-18113e8891,}" Dec 13 14:08:00.536666 kubelet[2353]: E1213 14:08:00.536649 2353 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:08:00.536913 kubelet[2353]: E1213 14:08:00.536903 2353 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-18113e8891\" not found" Dec 13 14:08:00.537026 kubelet[2353]: I1213 14:08:00.537016 2353 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:08:00.537202 kubelet[2353]: I1213 14:08:00.537190 2353 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:08:00.537316 kubelet[2353]: I1213 14:08:00.537305 2353 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:08:00.537636 kubelet[2353]: W1213 14:08:00.537602 2353 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Dec 13 14:08:00.537736 kubelet[2353]: E1213 14:08:00.537726 2353 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Dec 13 14:08:00.538133 kubelet[2353]: E1213 14:08:00.538117 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-18113e8891?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="200ms" Dec 13 14:08:00.538614 kubelet[2353]: I1213 14:08:00.538599 2353 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:08:00.538776 kubelet[2353]: I1213 14:08:00.538761 2353 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial 
unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:08:00.540199 kubelet[2353]: I1213 14:08:00.540185 2353 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:08:00.579638 kubelet[2353]: I1213 14:08:00.579599 2353 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:08:00.580679 kubelet[2353]: I1213 14:08:00.580660 2353 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:08:00.580679 kubelet[2353]: I1213 14:08:00.580682 2353 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:08:00.580765 kubelet[2353]: I1213 14:08:00.580698 2353 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:08:00.580765 kubelet[2353]: E1213 14:08:00.580753 2353 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:08:00.581572 kubelet[2353]: W1213 14:08:00.581535 2353 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Dec 13 14:08:00.581688 kubelet[2353]: E1213 14:08:00.581678 2353 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Dec 13 14:08:00.607984 kubelet[2353]: I1213 14:08:00.607957 2353 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:08:00.608099 kubelet[2353]: I1213 14:08:00.607990 2353 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:08:00.608099 kubelet[2353]: I1213 14:08:00.608009 2353 state_mem.go:36] "Initialized new in-memory state 
store" Dec 13 14:08:00.612252 kubelet[2353]: I1213 14:08:00.612229 2353 policy_none.go:49] "None policy: Start" Dec 13 14:08:00.612882 kubelet[2353]: I1213 14:08:00.612842 2353 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:08:00.612882 kubelet[2353]: I1213 14:08:00.612884 2353 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:08:00.619806 kubelet[2353]: I1213 14:08:00.619782 2353 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:08:00.620118 kubelet[2353]: I1213 14:08:00.620103 2353 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:08:00.622213 kubelet[2353]: E1213 14:08:00.622197 2353 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-18113e8891\" not found" Dec 13 14:08:00.638808 kubelet[2353]: I1213 14:08:00.638792 2353 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.639290 kubelet[2353]: E1213 14:08:00.639270 2353 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.681661 kubelet[2353]: I1213 14:08:00.681633 2353 topology_manager.go:215] "Topology Admit Handler" podUID="c5b89ebb374d6cc57b614a7abc89df4d" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.683098 kubelet[2353]: I1213 14:08:00.683081 2353 topology_manager.go:215] "Topology Admit Handler" podUID="b9ed8192785094e30b92eb395d00485d" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.684341 kubelet[2353]: I1213 14:08:00.684319 2353 topology_manager.go:215] "Topology Admit Handler" podUID="e55cd0ebd1eb8c2f0bb0d8db6633036b" podNamespace="kube-system" 
podName="kube-apiserver-ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.740628 kubelet[2353]: E1213 14:08:00.739256 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-18113e8891?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="400ms" Dec 13 14:08:00.838696 kubelet[2353]: I1213 14:08:00.838664 2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5b89ebb374d6cc57b614a7abc89df4d-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-18113e8891\" (UID: \"c5b89ebb374d6cc57b614a7abc89df4d\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.838814 kubelet[2353]: I1213 14:08:00.838718 2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c5b89ebb374d6cc57b614a7abc89df4d-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-18113e8891\" (UID: \"c5b89ebb374d6cc57b614a7abc89df4d\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.838814 kubelet[2353]: I1213 14:08:00.838744 2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b9ed8192785094e30b92eb395d00485d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-18113e8891\" (UID: \"b9ed8192785094e30b92eb395d00485d\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.838814 kubelet[2353]: I1213 14:08:00.838774 2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e55cd0ebd1eb8c2f0bb0d8db6633036b-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-18113e8891\" (UID: 
\"e55cd0ebd1eb8c2f0bb0d8db6633036b\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.838814 kubelet[2353]: I1213 14:08:00.838801 2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e55cd0ebd1eb8c2f0bb0d8db6633036b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-18113e8891\" (UID: \"e55cd0ebd1eb8c2f0bb0d8db6633036b\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.838913 kubelet[2353]: I1213 14:08:00.838823 2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5b89ebb374d6cc57b614a7abc89df4d-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-18113e8891\" (UID: \"c5b89ebb374d6cc57b614a7abc89df4d\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.838913 kubelet[2353]: I1213 14:08:00.838853 2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c5b89ebb374d6cc57b614a7abc89df4d-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-18113e8891\" (UID: \"c5b89ebb374d6cc57b614a7abc89df4d\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.838913 kubelet[2353]: I1213 14:08:00.838875 2353 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5b89ebb374d6cc57b614a7abc89df4d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-18113e8891\" (UID: \"c5b89ebb374d6cc57b614a7abc89df4d\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.838913 kubelet[2353]: I1213 14:08:00.838904 2353 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e55cd0ebd1eb8c2f0bb0d8db6633036b-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-18113e8891\" (UID: \"e55cd0ebd1eb8c2f0bb0d8db6633036b\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.840992 kubelet[2353]: I1213 14:08:00.840976 2353 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.841462 kubelet[2353]: E1213 14:08:00.841446 2353 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.6-a-18113e8891" Dec 13 14:08:00.990092 env[1556]: time="2024-12-13T14:08:00.989941666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-18113e8891,Uid:c5b89ebb374d6cc57b614a7abc89df4d,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:00.990790 env[1556]: time="2024-12-13T14:08:00.990584859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-18113e8891,Uid:b9ed8192785094e30b92eb395d00485d,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:00.993615 env[1556]: time="2024-12-13T14:08:00.993461851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-18113e8891,Uid:e55cd0ebd1eb8c2f0bb0d8db6633036b,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:01.139951 kubelet[2353]: E1213 14:08:01.139902 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-18113e8891?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="800ms" Dec 13 14:08:01.244280 kubelet[2353]: I1213 14:08:01.243883 2353 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-18113e8891" Dec 13 
14:08:01.244280 kubelet[2353]: E1213 14:08:01.244203 2353 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.6-a-18113e8891" Dec 13 14:08:01.503063 kubelet[2353]: W1213 14:08:01.502910 2353 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Dec 13 14:08:01.503063 kubelet[2353]: E1213 14:08:01.502962 2353 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Dec 13 14:08:01.598437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1526713823.mount: Deactivated successfully. 
Dec 13 14:08:01.637708 env[1556]: time="2024-12-13T14:08:01.637655621Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:01.640563 env[1556]: time="2024-12-13T14:08:01.640528035Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:01.653329 env[1556]: time="2024-12-13T14:08:01.653287689Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:01.656216 env[1556]: time="2024-12-13T14:08:01.656185615Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:01.671062 env[1556]: time="2024-12-13T14:08:01.671026132Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:01.677360 env[1556]: time="2024-12-13T14:08:01.677309070Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:01.681236 env[1556]: time="2024-12-13T14:08:01.681201841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:01.685781 env[1556]: time="2024-12-13T14:08:01.685740969Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:08:01.689642 env[1556]: time="2024-12-13T14:08:01.689617546Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:01.694383 env[1556]: time="2024-12-13T14:08:01.694344215Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:01.698179 env[1556]: time="2024-12-13T14:08:01.698148774Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:01.703362 env[1556]: time="2024-12-13T14:08:01.703330099Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:01.765573 env[1556]: time="2024-12-13T14:08:01.763711528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:01.765573 env[1556]: time="2024-12-13T14:08:01.763832850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:01.765573 env[1556]: time="2024-12-13T14:08:01.763863120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:01.765573 env[1556]: time="2024-12-13T14:08:01.763997278Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6535012e78a2e2a5e65be7573ba8a5b51e3722389bc0d246eb0947fe9f7f10e pid=2392 runtime=io.containerd.runc.v2 Dec 13 14:08:01.772232 env[1556]: time="2024-12-13T14:08:01.772159223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:01.772232 env[1556]: time="2024-12-13T14:08:01.772199810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:01.772407 env[1556]: time="2024-12-13T14:08:01.772218724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:01.772615 env[1556]: time="2024-12-13T14:08:01.772574891Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5ecc908a50eb48fc43e2c6ec0df65aa72ed9acca852a9abd1d944baf3f2e391 pid=2411 runtime=io.containerd.runc.v2 Dec 13 14:08:01.797714 env[1556]: time="2024-12-13T14:08:01.796990948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:01.797714 env[1556]: time="2024-12-13T14:08:01.797030415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:01.797714 env[1556]: time="2024-12-13T14:08:01.797040452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:01.797714 env[1556]: time="2024-12-13T14:08:01.797293452Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fe2decb1d358c76267d1c82f41eae927a51f9ff7dcd586649de1241e8702ade pid=2455 runtime=io.containerd.runc.v2 Dec 13 14:08:01.830990 kubelet[2353]: W1213 14:08:01.830946 2353 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Dec 13 14:08:01.830990 kubelet[2353]: E1213 14:08:01.830994 2353 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Dec 13 14:08:01.838457 env[1556]: time="2024-12-13T14:08:01.838400362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-18113e8891,Uid:b9ed8192785094e30b92eb395d00485d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6535012e78a2e2a5e65be7573ba8a5b51e3722389bc0d246eb0947fe9f7f10e\"" Dec 13 14:08:01.847210 env[1556]: time="2024-12-13T14:08:01.847171715Z" level=info msg="CreateContainer within sandbox \"c6535012e78a2e2a5e65be7573ba8a5b51e3722389bc0d246eb0947fe9f7f10e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:08:01.850457 kubelet[2353]: W1213 14:08:01.850403 2353 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused Dec 13 14:08:01.850457 kubelet[2353]: E1213 14:08:01.850462 2353 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Dec 13 14:08:01.852437 env[1556]: time="2024-12-13T14:08:01.852374433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-18113e8891,Uid:e55cd0ebd1eb8c2f0bb0d8db6633036b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5ecc908a50eb48fc43e2c6ec0df65aa72ed9acca852a9abd1d944baf3f2e391\""
Dec 13 14:08:01.857281 env[1556]: time="2024-12-13T14:08:01.856366533Z" level=info msg="CreateContainer within sandbox \"d5ecc908a50eb48fc43e2c6ec0df65aa72ed9acca852a9abd1d944baf3f2e391\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 14:08:01.879787 env[1556]: time="2024-12-13T14:08:01.879746277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-18113e8891,Uid:c5b89ebb374d6cc57b614a7abc89df4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fe2decb1d358c76267d1c82f41eae927a51f9ff7dcd586649de1241e8702ade\""
Dec 13 14:08:01.882767 env[1556]: time="2024-12-13T14:08:01.882721418Z" level=info msg="CreateContainer within sandbox \"2fe2decb1d358c76267d1c82f41eae927a51f9ff7dcd586649de1241e8702ade\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 14:08:01.886192 kubelet[2353]: W1213 14:08:01.886120 2353 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-18113e8891&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Dec 13 14:08:01.886192 kubelet[2353]: E1213 14:08:01.886195 2353 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-18113e8891&limit=500&resourceVersion=0": dial tcp 10.200.20.41:6443: connect: connection refused
Dec 13 14:08:01.936646 env[1556]: time="2024-12-13T14:08:01.936595779Z" level=info msg="CreateContainer within sandbox \"d5ecc908a50eb48fc43e2c6ec0df65aa72ed9acca852a9abd1d944baf3f2e391\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8e7b8027ec4c790988bfaccf68258bfc0f91e529a81b629ad3d25472d5d792cc\""
Dec 13 14:08:01.937545 env[1556]: time="2024-12-13T14:08:01.937517689Z" level=info msg="StartContainer for \"8e7b8027ec4c790988bfaccf68258bfc0f91e529a81b629ad3d25472d5d792cc\""
Dec 13 14:08:01.940919 kubelet[2353]: E1213 14:08:01.940888 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-18113e8891?timeout=10s\": dial tcp 10.200.20.41:6443: connect: connection refused" interval="1.6s"
Dec 13 14:08:01.941478 env[1556]: time="2024-12-13T14:08:01.941438492Z" level=info msg="CreateContainer within sandbox \"c6535012e78a2e2a5e65be7573ba8a5b51e3722389bc0d246eb0947fe9f7f10e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d1551e6e53268393f1db8e470a712584ec952bd621852a656e2a5eb088faf401\""
Dec 13 14:08:01.943915 env[1556]: time="2024-12-13T14:08:01.943887679Z" level=info msg="StartContainer for \"d1551e6e53268393f1db8e470a712584ec952bd621852a656e2a5eb088faf401\""
Dec 13 14:08:01.948609 env[1556]: time="2024-12-13T14:08:01.948560524Z" level=info msg="CreateContainer within sandbox \"2fe2decb1d358c76267d1c82f41eae927a51f9ff7dcd586649de1241e8702ade\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7153566244a7b491df00a666ee54de4b3790418884abc2c8489e58c11f01e781\""
Dec 13 14:08:01.949120 env[1556]: time="2024-12-13T14:08:01.949089358Z" level=info msg="StartContainer for \"7153566244a7b491df00a666ee54de4b3790418884abc2c8489e58c11f01e781\""
Dec 13 14:08:02.021941 env[1556]: time="2024-12-13T14:08:02.021845303Z" level=info msg="StartContainer for \"8e7b8027ec4c790988bfaccf68258bfc0f91e529a81b629ad3d25472d5d792cc\" returns successfully"
Dec 13 14:08:02.046636 env[1556]: time="2024-12-13T14:08:02.046590705Z" level=info msg="StartContainer for \"d1551e6e53268393f1db8e470a712584ec952bd621852a656e2a5eb088faf401\" returns successfully"
Dec 13 14:08:02.047220 kubelet[2353]: I1213 14:08:02.047190 2353 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-18113e8891"
Dec 13 14:08:02.047675 kubelet[2353]: E1213 14:08:02.047656 2353 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.41:6443/api/v1/nodes\": dial tcp 10.200.20.41:6443: connect: connection refused" node="ci-3510.3.6-a-18113e8891"
Dec 13 14:08:02.059580 env[1556]: time="2024-12-13T14:08:02.059532589Z" level=info msg="StartContainer for \"7153566244a7b491df00a666ee54de4b3790418884abc2c8489e58c11f01e781\" returns successfully"
Dec 13 14:08:03.649550 kubelet[2353]: I1213 14:08:03.649519 2353 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-18113e8891"
Dec 13 14:08:03.995495 kubelet[2353]: I1213 14:08:03.995395 2353 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-18113e8891"
Dec 13 14:08:04.035535 kubelet[2353]: E1213 14:08:04.035471 2353 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-18113e8891\" not found"
Dec 13 14:08:04.127834 kubelet[2353]: E1213 14:08:04.127783 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Dec 13 14:08:04.135993 kubelet[2353]: E1213 14:08:04.135942 2353 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-18113e8891\" not found"
Dec 13 14:08:04.236700 kubelet[2353]: E1213 14:08:04.236657 2353 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-18113e8891\" not found"
Dec 13 14:08:04.337003 kubelet[2353]: E1213 14:08:04.336960 2353 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-18113e8891\" not found"
Dec 13 14:08:04.437922 kubelet[2353]: E1213 14:08:04.437862 2353 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-18113e8891\" not found"
Dec 13 14:08:04.538598 kubelet[2353]: E1213 14:08:04.538558 2353 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-18113e8891\" not found"
Dec 13 14:08:04.639509 kubelet[2353]: E1213 14:08:04.639397 2353 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-18113e8891\" not found"
Dec 13 14:08:04.740120 kubelet[2353]: E1213 14:08:04.740080 2353 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-18113e8891\" not found"
Dec 13 14:08:04.977224 kubelet[2353]: W1213 14:08:04.977107 2353 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:08:05.526387 kubelet[2353]: I1213 14:08:05.526346 2353 apiserver.go:52] "Watching apiserver"
Dec 13 14:08:05.538121 kubelet[2353]: I1213 14:08:05.538088 2353 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:08:06.077637 kubelet[2353]: W1213 14:08:06.077600 2353 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:08:06.773939 systemd[1]: Reloading.
Dec 13 14:08:06.843204 /usr/lib/systemd/system-generators/torcx-generator[2641]: time="2024-12-13T14:08:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:08:06.843231 /usr/lib/systemd/system-generators/torcx-generator[2641]: time="2024-12-13T14:08:06Z" level=info msg="torcx already run"
Dec 13 14:08:06.925774 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:08:06.925928 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:08:06.943331 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:08:07.026774 kubelet[2353]: I1213 14:08:07.026679 2353 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:08:07.027223 systemd[1]: Stopping kubelet.service...
Dec 13 14:08:07.045765 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:08:07.046240 systemd[1]: Stopped kubelet.service.
Dec 13 14:08:07.048789 systemd[1]: Starting kubelet.service...
Dec 13 14:08:07.207146 systemd[1]: Started kubelet.service.
Dec 13 14:08:07.272366 kubelet[2717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:08:07.272718 kubelet[2717]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:08:07.272764 kubelet[2717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:08:07.272886 kubelet[2717]: I1213 14:08:07.272851 2717 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:08:07.277609 kubelet[2717]: I1213 14:08:07.277518 2717 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:08:07.277609 kubelet[2717]: I1213 14:08:07.277549 2717 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:08:07.278187 kubelet[2717]: I1213 14:08:07.278166 2717 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:08:07.279645 kubelet[2717]: I1213 14:08:07.279621 2717 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 14:08:07.282516 kubelet[2717]: I1213 14:08:07.282488 2717 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:08:07.294462 kubelet[2717]: I1213 14:08:07.294431 2717 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:08:07.294845 kubelet[2717]: I1213 14:08:07.294823 2717 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:08:07.294988 kubelet[2717]: I1213 14:08:07.294970 2717 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:08:07.295107 kubelet[2717]: I1213 14:08:07.294992 2717 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:08:07.295107 kubelet[2717]: I1213 14:08:07.295001 2717 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:08:07.295107 kubelet[2717]: I1213 14:08:07.295029 2717 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:08:07.296453 kubelet[2717]: I1213 14:08:07.296098 2717 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:08:07.296453 kubelet[2717]: I1213 14:08:07.296184 2717 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:08:07.296453 kubelet[2717]: I1213 14:08:07.296212 2717 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:08:07.296453 kubelet[2717]: I1213 14:08:07.296248 2717 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:08:07.301982 kubelet[2717]: I1213 14:08:07.301964 2717 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:08:07.302314 kubelet[2717]: I1213 14:08:07.302301 2717 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:08:07.303524 kubelet[2717]: I1213 14:08:07.303506 2717 server.go:1256] "Started kubelet"
Dec 13 14:08:07.305428 kubelet[2717]: I1213 14:08:07.305412 2717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:08:07.314015 kubelet[2717]: I1213 14:08:07.313997 2717 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:08:07.314806 kubelet[2717]: I1213 14:08:07.314790 2717 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:08:07.315836 kubelet[2717]: I1213 14:08:07.315819 2717 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:08:07.316051 kubelet[2717]: I1213 14:08:07.316039 2717 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:08:07.317324 kubelet[2717]: I1213 14:08:07.317305 2717 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:08:07.318997 kubelet[2717]: I1213 14:08:07.318978 2717 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:08:07.319226 kubelet[2717]: I1213 14:08:07.319214 2717 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:08:07.320778 kubelet[2717]: I1213 14:08:07.320762 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:08:07.321851 kubelet[2717]: I1213 14:08:07.321835 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:08:07.321957 kubelet[2717]: I1213 14:08:07.321947 2717 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:08:07.322028 kubelet[2717]: I1213 14:08:07.322019 2717 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:08:07.322146 kubelet[2717]: E1213 14:08:07.322137 2717 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:08:07.334057 kubelet[2717]: I1213 14:08:07.334037 2717 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:08:07.334269 kubelet[2717]: I1213 14:08:07.334250 2717 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:08:07.335479 kubelet[2717]: I1213 14:08:07.335462 2717 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:08:07.390975 kubelet[2717]: I1213 14:08:07.390942 2717 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:08:07.390975 kubelet[2717]: I1213 14:08:07.390966 2717 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:08:07.390975 kubelet[2717]: I1213 14:08:07.390984 2717 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:08:07.391188 kubelet[2717]: I1213 14:08:07.391181 2717 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 14:08:07.391215 kubelet[2717]: I1213 14:08:07.391202 2717 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 14:08:07.391215 kubelet[2717]: I1213 14:08:07.391209 2717 policy_none.go:49] "None policy: Start"
Dec 13 14:08:07.391939 kubelet[2717]: I1213 14:08:07.391908 2717 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:08:07.392048 kubelet[2717]: I1213 14:08:07.392022 2717 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:08:07.392299 kubelet[2717]: I1213 14:08:07.392286 2717 state_mem.go:75] "Updated machine memory state"
Dec 13 14:08:07.393999 kubelet[2717]: I1213 14:08:07.393981 2717 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:08:07.395937 kubelet[2717]: I1213 14:08:07.395893 2717 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:08:07.419950 kubelet[2717]: I1213 14:08:07.419923 2717 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.422837 kubelet[2717]: I1213 14:08:07.422809 2717 topology_manager.go:215] "Topology Admit Handler" podUID="b9ed8192785094e30b92eb395d00485d" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.422997 kubelet[2717]: I1213 14:08:07.422893 2717 topology_manager.go:215] "Topology Admit Handler" podUID="e55cd0ebd1eb8c2f0bb0d8db6633036b" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.423129 kubelet[2717]: I1213 14:08:07.423110 2717 topology_manager.go:215] "Topology Admit Handler" podUID="c5b89ebb374d6cc57b614a7abc89df4d" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.436273 kubelet[2717]: W1213 14:08:07.436090 2717 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:08:07.448766 kubelet[2717]: W1213 14:08:07.448263 2717 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:08:07.448766 kubelet[2717]: E1213 14:08:07.448314 2717 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-18113e8891\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.448766 kubelet[2717]: W1213 14:08:07.448384 2717 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:08:07.448766 kubelet[2717]: E1213 14:08:07.448413 2717 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.6-a-18113e8891\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.449506 kubelet[2717]: I1213 14:08:07.449485 2717 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.449572 kubelet[2717]: I1213 14:08:07.449551 2717 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.492843 sudo[2746]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 14:08:07.493050 sudo[2746]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 14:08:07.620299 kubelet[2717]: I1213 14:08:07.620196 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e55cd0ebd1eb8c2f0bb0d8db6633036b-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-18113e8891\" (UID: \"e55cd0ebd1eb8c2f0bb0d8db6633036b\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.620534 kubelet[2717]: I1213 14:08:07.620497 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5b89ebb374d6cc57b614a7abc89df4d-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-18113e8891\" (UID: \"c5b89ebb374d6cc57b614a7abc89df4d\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.620631 kubelet[2717]: I1213 14:08:07.620622 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5b89ebb374d6cc57b614a7abc89df4d-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-18113e8891\" (UID: \"c5b89ebb374d6cc57b614a7abc89df4d\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.620731 kubelet[2717]: I1213 14:08:07.620722 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5b89ebb374d6cc57b614a7abc89df4d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-18113e8891\" (UID: \"c5b89ebb374d6cc57b614a7abc89df4d\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.620839 kubelet[2717]: I1213 14:08:07.620830 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b9ed8192785094e30b92eb395d00485d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-18113e8891\" (UID: \"b9ed8192785094e30b92eb395d00485d\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.620937 kubelet[2717]: I1213 14:08:07.620928 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e55cd0ebd1eb8c2f0bb0d8db6633036b-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-18113e8891\" (UID: \"e55cd0ebd1eb8c2f0bb0d8db6633036b\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.621044 kubelet[2717]: I1213 14:08:07.621034 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e55cd0ebd1eb8c2f0bb0d8db6633036b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-18113e8891\" (UID: \"e55cd0ebd1eb8c2f0bb0d8db6633036b\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.621178 kubelet[2717]: I1213 14:08:07.621167 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c5b89ebb374d6cc57b614a7abc89df4d-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-18113e8891\" (UID: \"c5b89ebb374d6cc57b614a7abc89df4d\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-18113e8891"
Dec 13 14:08:07.621301 kubelet[2717]: I1213 14:08:07.621282 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c5b89ebb374d6cc57b614a7abc89df4d-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-18113e8891\" (UID: \"c5b89ebb374d6cc57b614a7abc89df4d\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-18113e8891"
Dec 13 14:08:08.002501 sudo[2746]: pam_unix(sudo:session): session closed for user root
Dec 13 14:08:08.297483 kubelet[2717]: I1213 14:08:08.297451 2717 apiserver.go:52] "Watching apiserver"
Dec 13 14:08:08.319572 kubelet[2717]: I1213 14:08:08.319540 2717 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:08:08.407171 kubelet[2717]: I1213 14:08:08.407137 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-18113e8891" podStartSLOduration=2.407093892 podStartE2EDuration="2.407093892s" podCreationTimestamp="2024-12-13 14:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:08.39037136 +0000 UTC m=+1.174347192" watchObservedRunningTime="2024-12-13 14:08:08.407093892 +0000 UTC m=+1.191069724"
Dec 13 14:08:08.416831 kubelet[2717]: I1213 14:08:08.416799 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-18113e8891" podStartSLOduration=4.416759343 podStartE2EDuration="4.416759343s" podCreationTimestamp="2024-12-13 14:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:08.407928945 +0000 UTC m=+1.191904777" watchObservedRunningTime="2024-12-13 14:08:08.416759343 +0000 UTC m=+1.200735175"
Dec 13 14:08:08.428086 kubelet[2717]: I1213 14:08:08.428037 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-18113e8891" podStartSLOduration=1.427999046 podStartE2EDuration="1.427999046s" podCreationTimestamp="2024-12-13 14:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:08.41761595 +0000 UTC m=+1.201591782" watchObservedRunningTime="2024-12-13 14:08:08.427999046 +0000 UTC m=+1.211974878"
Dec 13 14:08:09.887515 sudo[1982]: pam_unix(sudo:session): session closed for user root
Dec 13 14:08:09.958213 sshd[1978]: pam_unix(sshd:session): session closed for user core
Dec 13 14:08:09.961374 systemd[1]: sshd@4-10.200.20.41:22-10.200.16.10:44742.service: Deactivated successfully.
Dec 13 14:08:09.962345 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 14:08:09.962386 systemd-logind[1542]: Session 7 logged out. Waiting for processes to exit.
Dec 13 14:08:09.963351 systemd-logind[1542]: Removed session 7.
Dec 13 14:08:21.179474 kubelet[2717]: I1213 14:08:21.179442 2717 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 14:08:21.179901 env[1556]: time="2024-12-13T14:08:21.179779179Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:08:21.180122 kubelet[2717]: I1213 14:08:21.179946 2717 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 14:08:22.032555 kubelet[2717]: I1213 14:08:22.032504 2717 topology_manager.go:215] "Topology Admit Handler" podUID="92a4f382-7eda-4c40-b9cc-412d9578ef23" podNamespace="kube-system" podName="kube-proxy-8lm8p"
Dec 13 14:08:22.046946 kubelet[2717]: I1213 14:08:22.046910 2717 topology_manager.go:215] "Topology Admit Handler" podUID="07dc6c93-9c4d-4401-befa-fb116e2f15c6" podNamespace="kube-system" podName="cilium-kmlcf"
Dec 13 14:08:22.089880 kubelet[2717]: I1213 14:08:22.089846 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-hostproc\") pod \"cilium-kmlcf\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") " pod="kube-system/cilium-kmlcf"
Dec 13 14:08:22.090102 kubelet[2717]: I1213 14:08:22.090090 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-xtables-lock\") pod \"cilium-kmlcf\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") " pod="kube-system/cilium-kmlcf"
Dec 13 14:08:22.090207 kubelet[2717]: I1213 14:08:22.090197 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92a4f382-7eda-4c40-b9cc-412d9578ef23-lib-modules\") pod \"kube-proxy-8lm8p\" (UID: \"92a4f382-7eda-4c40-b9cc-412d9578ef23\") " pod="kube-system/kube-proxy-8lm8p"
Dec 13 14:08:22.090317 kubelet[2717]: I1213 14:08:22.090306 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j4xb\" (UniqueName: \"kubernetes.io/projected/92a4f382-7eda-4c40-b9cc-412d9578ef23-kube-api-access-8j4xb\") pod \"kube-proxy-8lm8p\" (UID: \"92a4f382-7eda-4c40-b9cc-412d9578ef23\") " pod="kube-system/kube-proxy-8lm8p"
Dec 13 14:08:22.090414 kubelet[2717]: I1213 14:08:22.090404 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cilium-run\") pod \"cilium-kmlcf\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") " pod="kube-system/cilium-kmlcf"
Dec 13 14:08:22.090508 kubelet[2717]: I1213 14:08:22.090497 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cilium-cgroup\") pod \"cilium-kmlcf\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") " pod="kube-system/cilium-kmlcf"
Dec 13 14:08:22.090601 kubelet[2717]: I1213 14:08:22.090590 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92a4f382-7eda-4c40-b9cc-412d9578ef23-kube-proxy\") pod \"kube-proxy-8lm8p\" (UID: \"92a4f382-7eda-4c40-b9cc-412d9578ef23\") " pod="kube-system/kube-proxy-8lm8p"
Dec 13 14:08:22.090717 kubelet[2717]: I1213 14:08:22.090707 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-lib-modules\") pod \"cilium-kmlcf\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") " pod="kube-system/cilium-kmlcf"
Dec 13 14:08:22.090816 kubelet[2717]: I1213 14:08:22.090806 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79r5f\" (UniqueName: \"kubernetes.io/projected/07dc6c93-9c4d-4401-befa-fb116e2f15c6-kube-api-access-79r5f\") pod \"cilium-kmlcf\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") " pod="kube-system/cilium-kmlcf"
Dec 13 14:08:22.090915 kubelet[2717]: I1213 14:08:22.090906 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-host-proc-sys-kernel\") pod \"cilium-kmlcf\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") " pod="kube-system/cilium-kmlcf"
Dec 13 14:08:22.091011 kubelet[2717]: I1213 14:08:22.091001 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/07dc6c93-9c4d-4401-befa-fb116e2f15c6-hubble-tls\") pod \"cilium-kmlcf\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") " pod="kube-system/cilium-kmlcf"
Dec 13 14:08:22.091117 kubelet[2717]: I1213 14:08:22.091107 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-bpf-maps\") pod \"cilium-kmlcf\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") " pod="kube-system/cilium-kmlcf"
Dec 13 14:08:22.091223 kubelet[2717]: I1213 14:08:22.091213 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cni-path\") pod \"cilium-kmlcf\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") " pod="kube-system/cilium-kmlcf"
Dec 13 14:08:22.091325 kubelet[2717]: I1213 14:08:22.091315 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-etc-cni-netd\") pod \"cilium-kmlcf\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") " pod="kube-system/cilium-kmlcf"
Dec 13 14:08:22.091423 kubelet[2717]: I1213 14:08:22.091412 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/07dc6c93-9c4d-4401-befa-fb116e2f15c6-clustermesh-secrets\") pod \"cilium-kmlcf\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") " pod="kube-system/cilium-kmlcf"
Dec 13 14:08:22.091518 kubelet[2717]: I1213 14:08:22.091508 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92a4f382-7eda-4c40-b9cc-412d9578ef23-xtables-lock\") pod \"kube-proxy-8lm8p\" (UID: \"92a4f382-7eda-4c40-b9cc-412d9578ef23\") " pod="kube-system/kube-proxy-8lm8p"
Dec 13 14:08:22.091615 kubelet[2717]: I1213 14:08:22.091605 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cilium-config-path\") pod \"cilium-kmlcf\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") " pod="kube-system/cilium-kmlcf"
Dec 13 14:08:22.091725 kubelet[2717]: I1213 14:08:22.091715 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-host-proc-sys-net\") pod \"cilium-kmlcf\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") " pod="kube-system/cilium-kmlcf"
Dec 13 14:08:22.132994 kubelet[2717]: I1213 14:08:22.132954 2717 topology_manager.go:215] "Topology Admit Handler" podUID="8e260ecb-657a-4bb9-bbd6-58568492ad11" podNamespace="kube-system" podName="cilium-operator-5cc964979-thcrx"
Dec 13 14:08:22.192695 kubelet[2717]: I1213 14:08:22.192650 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvrwl\" (UniqueName: \"kubernetes.io/projected/8e260ecb-657a-4bb9-bbd6-58568492ad11-kube-api-access-tvrwl\") pod \"cilium-operator-5cc964979-thcrx\" (UID: \"8e260ecb-657a-4bb9-bbd6-58568492ad11\") " pod="kube-system/cilium-operator-5cc964979-thcrx"
Dec 13 14:08:22.193787 kubelet[2717]: I1213 14:08:22.193768 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e260ecb-657a-4bb9-bbd6-58568492ad11-cilium-config-path\") pod \"cilium-operator-5cc964979-thcrx\" (UID: \"8e260ecb-657a-4bb9-bbd6-58568492ad11\") " pod="kube-system/cilium-operator-5cc964979-thcrx"
Dec 13 14:08:22.336720 env[1556]: time="2024-12-13T14:08:22.336612180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8lm8p,Uid:92a4f382-7eda-4c40-b9cc-412d9578ef23,Namespace:kube-system,Attempt:0,}"
Dec 13 14:08:22.352648 env[1556]: time="2024-12-13T14:08:22.352360925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kmlcf,Uid:07dc6c93-9c4d-4401-befa-fb116e2f15c6,Namespace:kube-system,Attempt:0,}"
Dec 13 14:08:22.381427 env[1556]: time="2024-12-13T14:08:22.381354498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:08:22.381427 env[1556]: time="2024-12-13T14:08:22.381395169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:08:22.381626 env[1556]: time="2024-12-13T14:08:22.381413166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:08:22.381959 env[1556]: time="2024-12-13T14:08:22.381820400Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd33b12ad185080a84801f25f6b9f68027f8e449059dd34cca8b87c4e4d8861c pid=2801 runtime=io.containerd.runc.v2
Dec 13 14:08:22.401881 env[1556]: time="2024-12-13T14:08:22.399084788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:08:22.401881 env[1556]: time="2024-12-13T14:08:22.399124020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:08:22.401881 env[1556]: time="2024-12-13T14:08:22.399133738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:08:22.401881 env[1556]: time="2024-12-13T14:08:22.399227358Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27 pid=2825 runtime=io.containerd.runc.v2
Dec 13 14:08:22.438283 env[1556]: time="2024-12-13T14:08:22.438245433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-thcrx,Uid:8e260ecb-657a-4bb9-bbd6-58568492ad11,Namespace:kube-system,Attempt:0,}"
Dec 13 14:08:22.443787 env[1556]: time="2024-12-13T14:08:22.443750801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kmlcf,Uid:07dc6c93-9c4d-4401-befa-fb116e2f15c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27\""
Dec 13 14:08:22.448504 env[1556]: time="2024-12-13T14:08:22.448471574Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:08:22.449731 env[1556]: time="2024-12-13T14:08:22.449686439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8lm8p,Uid:92a4f382-7eda-4c40-b9cc-412d9578ef23,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd33b12ad185080a84801f25f6b9f68027f8e449059dd34cca8b87c4e4d8861c\""
Dec 13 14:08:22.453520 env[1556]: time="2024-12-13T14:08:22.453394184Z" level=info msg="CreateContainer within sandbox \"dd33b12ad185080a84801f25f6b9f68027f8e449059dd34cca8b87c4e4d8861c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:08:22.485246 env[1556]: time="2024-12-13T14:08:22.485089591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:08:22.485246 env[1556]: time="2024-12-13T14:08:22.485145140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:08:22.485246 env[1556]: time="2024-12-13T14:08:22.485156657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:22.485542 env[1556]: time="2024-12-13T14:08:22.485501945Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf1e5912c98e419adced2eacc0c810fe4759c847dd9b2255ef7dd479ee788759 pid=2885 runtime=io.containerd.runc.v2 Dec 13 14:08:22.508489 env[1556]: time="2024-12-13T14:08:22.508440105Z" level=info msg="CreateContainer within sandbox \"dd33b12ad185080a84801f25f6b9f68027f8e449059dd34cca8b87c4e4d8861c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ab1946dd267e4f75cb22cb263a6deff098097a1b21e78dae4a78ad83fd0492f8\"" Dec 13 14:08:22.510298 env[1556]: time="2024-12-13T14:08:22.510241288Z" level=info msg="StartContainer for \"ab1946dd267e4f75cb22cb263a6deff098097a1b21e78dae4a78ad83fd0492f8\"" Dec 13 14:08:22.539778 env[1556]: time="2024-12-13T14:08:22.539736356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-thcrx,Uid:8e260ecb-657a-4bb9-bbd6-58568492ad11,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf1e5912c98e419adced2eacc0c810fe4759c847dd9b2255ef7dd479ee788759\"" Dec 13 14:08:22.572970 env[1556]: time="2024-12-13T14:08:22.572918853Z" level=info msg="StartContainer for \"ab1946dd267e4f75cb22cb263a6deff098097a1b21e78dae4a78ad83fd0492f8\" returns successfully" Dec 13 14:08:23.418900 kubelet[2717]: I1213 14:08:23.418871 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8lm8p" podStartSLOduration=1.418824028 podStartE2EDuration="1.418824028s" podCreationTimestamp="2024-12-13 14:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:23.417624594 +0000 UTC m=+16.201600426" watchObservedRunningTime="2024-12-13 14:08:23.418824028 +0000 UTC m=+16.202799860" Dec 13 14:08:27.126028 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3406104379.mount: Deactivated successfully. Dec 13 14:08:29.390134 env[1556]: time="2024-12-13T14:08:29.390093738Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:29.398696 env[1556]: time="2024-12-13T14:08:29.398652778Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:29.403805 env[1556]: time="2024-12-13T14:08:29.403770181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:29.404496 env[1556]: time="2024-12-13T14:08:29.404463212Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 14:08:29.406467 env[1556]: time="2024-12-13T14:08:29.406373815Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:08:29.407968 env[1556]: time="2024-12-13T14:08:29.407932164Z" level=info msg="CreateContainer within sandbox \"0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:08:29.437146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount87084829.mount: Deactivated successfully. Dec 13 14:08:29.448208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount848452934.mount: Deactivated successfully. 
Dec 13 14:08:29.461486 env[1556]: time="2024-12-13T14:08:29.461448041Z" level=info msg="CreateContainer within sandbox \"0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3\"" Dec 13 14:08:29.462030 env[1556]: time="2024-12-13T14:08:29.462007776Z" level=info msg="StartContainer for \"b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3\"" Dec 13 14:08:29.507242 env[1556]: time="2024-12-13T14:08:29.507203369Z" level=info msg="StartContainer for \"b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3\" returns successfully" Dec 13 14:08:30.435185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3-rootfs.mount: Deactivated successfully. Dec 13 14:08:31.250736 env[1556]: time="2024-12-13T14:08:31.250668556Z" level=info msg="shim disconnected" id=b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3 Dec 13 14:08:31.250736 env[1556]: time="2024-12-13T14:08:31.250732905Z" level=warning msg="cleaning up after shim disconnected" id=b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3 namespace=k8s.io Dec 13 14:08:31.250736 env[1556]: time="2024-12-13T14:08:31.250742863Z" level=info msg="cleaning up dead shim" Dec 13 14:08:31.258439 env[1556]: time="2024-12-13T14:08:31.258390316Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:08:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3123 runtime=io.containerd.runc.v2\n" Dec 13 14:08:31.445441 env[1556]: time="2024-12-13T14:08:31.444125265Z" level=info msg="CreateContainer within sandbox \"0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:08:31.499919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2919055848.mount: 
Deactivated successfully. Dec 13 14:08:31.514166 env[1556]: time="2024-12-13T14:08:31.514102892Z" level=info msg="CreateContainer within sandbox \"0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f\"" Dec 13 14:08:31.516027 env[1556]: time="2024-12-13T14:08:31.515999708Z" level=info msg="StartContainer for \"0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f\"" Dec 13 14:08:31.572517 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:08:31.575304 env[1556]: time="2024-12-13T14:08:31.573866572Z" level=info msg="StartContainer for \"0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f\" returns successfully" Dec 13 14:08:31.573050 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:08:31.573235 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:08:31.575755 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:08:31.587915 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 14:08:31.611388 env[1556]: time="2024-12-13T14:08:31.611340574Z" level=info msg="shim disconnected" id=0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f Dec 13 14:08:31.611623 env[1556]: time="2024-12-13T14:08:31.611607006Z" level=warning msg="cleaning up after shim disconnected" id=0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f namespace=k8s.io Dec 13 14:08:31.611704 env[1556]: time="2024-12-13T14:08:31.611691911Z" level=info msg="cleaning up dead shim" Dec 13 14:08:31.618198 env[1556]: time="2024-12-13T14:08:31.618162497Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:08:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3189 runtime=io.containerd.runc.v2\n" Dec 13 14:08:32.447487 env[1556]: time="2024-12-13T14:08:32.447407894Z" level=info msg="CreateContainer within sandbox \"0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:08:32.481863 env[1556]: time="2024-12-13T14:08:32.481775830Z" level=info msg="CreateContainer within sandbox \"0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6\"" Dec 13 14:08:32.482991 env[1556]: time="2024-12-13T14:08:32.482339849Z" level=info msg="StartContainer for \"06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6\"" Dec 13 14:08:32.494487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f-rootfs.mount: Deactivated successfully. 
Dec 13 14:08:32.542156 env[1556]: time="2024-12-13T14:08:32.542108885Z" level=info msg="StartContainer for \"06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6\" returns successfully" Dec 13 14:08:32.556943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6-rootfs.mount: Deactivated successfully. Dec 13 14:08:32.565475 env[1556]: time="2024-12-13T14:08:32.565421198Z" level=info msg="shim disconnected" id=06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6 Dec 13 14:08:32.565475 env[1556]: time="2024-12-13T14:08:32.565471189Z" level=warning msg="cleaning up after shim disconnected" id=06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6 namespace=k8s.io Dec 13 14:08:32.565475 env[1556]: time="2024-12-13T14:08:32.565480187Z" level=info msg="cleaning up dead shim" Dec 13 14:08:32.572422 env[1556]: time="2024-12-13T14:08:32.572377594Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:08:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3250 runtime=io.containerd.runc.v2\n" Dec 13 14:08:33.316911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount454358540.mount: Deactivated successfully. 
Dec 13 14:08:33.450107 env[1556]: time="2024-12-13T14:08:33.449247470Z" level=info msg="CreateContainer within sandbox \"0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:08:33.489040 env[1556]: time="2024-12-13T14:08:33.488991307Z" level=info msg="CreateContainer within sandbox \"0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735\"" Dec 13 14:08:33.490867 env[1556]: time="2024-12-13T14:08:33.490302716Z" level=info msg="StartContainer for \"53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735\"" Dec 13 14:08:33.546995 env[1556]: time="2024-12-13T14:08:33.546956533Z" level=info msg="StartContainer for \"53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735\" returns successfully" Dec 13 14:08:33.565373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735-rootfs.mount: Deactivated successfully. 
Dec 13 14:08:33.581500 env[1556]: time="2024-12-13T14:08:33.581193580Z" level=info msg="shim disconnected" id=53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735 Dec 13 14:08:33.581500 env[1556]: time="2024-12-13T14:08:33.581238092Z" level=warning msg="cleaning up after shim disconnected" id=53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735 namespace=k8s.io Dec 13 14:08:33.581500 env[1556]: time="2024-12-13T14:08:33.581246331Z" level=info msg="cleaning up dead shim" Dec 13 14:08:33.588605 env[1556]: time="2024-12-13T14:08:33.588550444Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:08:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3305 runtime=io.containerd.runc.v2\n" Dec 13 14:08:34.465306 env[1556]: time="2024-12-13T14:08:34.465261748Z" level=info msg="CreateContainer within sandbox \"0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:08:34.503977 env[1556]: time="2024-12-13T14:08:34.503928551Z" level=info msg="CreateContainer within sandbox \"0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454\"" Dec 13 14:08:34.506047 env[1556]: time="2024-12-13T14:08:34.505984433Z" level=info msg="StartContainer for \"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454\"" Dec 13 14:08:34.601060 env[1556]: time="2024-12-13T14:08:34.601020682Z" level=info msg="StartContainer for \"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454\" returns successfully" Dec 13 14:08:34.649779 systemd[1]: run-containerd-runc-k8s.io-e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454-runc.06IGqQ.mount: Deactivated successfully. Dec 13 14:08:34.665102 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Dec 13 14:08:34.808527 kubelet[2717]: I1213 14:08:34.806258 2717 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:08:34.860931 kubelet[2717]: I1213 14:08:34.860122 2717 topology_manager.go:215] "Topology Admit Handler" podUID="35a8f5df-2a4a-4737-b740-cf34c14cb63e" podNamespace="kube-system" podName="coredns-76f75df574-xd94t" Dec 13 14:08:34.865987 kubelet[2717]: I1213 14:08:34.863978 2717 topology_manager.go:215] "Topology Admit Handler" podUID="dd65ee0e-d2aa-4d32-9f34-1b5a4fd6ca58" podNamespace="kube-system" podName="coredns-76f75df574-hcgrk" Dec 13 14:08:34.875728 env[1556]: time="2024-12-13T14:08:34.875668486Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:34.884192 env[1556]: time="2024-12-13T14:08:34.884155492Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:34.888109 env[1556]: time="2024-12-13T14:08:34.888057174Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:08:34.888763 env[1556]: time="2024-12-13T14:08:34.888433428Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 14:08:34.890768 env[1556]: time="2024-12-13T14:08:34.890712792Z" level=info msg="CreateContainer within sandbox \"bf1e5912c98e419adced2eacc0c810fe4759c847dd9b2255ef7dd479ee788759\" for 
container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:08:34.920527 env[1556]: time="2024-12-13T14:08:34.920482780Z" level=info msg="CreateContainer within sandbox \"bf1e5912c98e419adced2eacc0c810fe4759c847dd9b2255ef7dd479ee788759\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db\"" Dec 13 14:08:34.921350 env[1556]: time="2024-12-13T14:08:34.921319075Z" level=info msg="StartContainer for \"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db\"" Dec 13 14:08:34.969086 env[1556]: time="2024-12-13T14:08:34.969026826Z" level=info msg="StartContainer for \"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db\" returns successfully" Dec 13 14:08:34.975980 kubelet[2717]: I1213 14:08:34.975907 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbvjl\" (UniqueName: \"kubernetes.io/projected/dd65ee0e-d2aa-4d32-9f34-1b5a4fd6ca58-kube-api-access-cbvjl\") pod \"coredns-76f75df574-hcgrk\" (UID: \"dd65ee0e-d2aa-4d32-9f34-1b5a4fd6ca58\") " pod="kube-system/coredns-76f75df574-hcgrk" Dec 13 14:08:34.975980 kubelet[2717]: I1213 14:08:34.975955 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnxhm\" (UniqueName: \"kubernetes.io/projected/35a8f5df-2a4a-4737-b740-cf34c14cb63e-kube-api-access-vnxhm\") pod \"coredns-76f75df574-xd94t\" (UID: \"35a8f5df-2a4a-4737-b740-cf34c14cb63e\") " pod="kube-system/coredns-76f75df574-xd94t" Dec 13 14:08:34.976175 kubelet[2717]: I1213 14:08:34.976035 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35a8f5df-2a4a-4737-b740-cf34c14cb63e-config-volume\") pod \"coredns-76f75df574-xd94t\" (UID: \"35a8f5df-2a4a-4737-b740-cf34c14cb63e\") " pod="kube-system/coredns-76f75df574-xd94t" Dec 
13 14:08:34.976175 kubelet[2717]: I1213 14:08:34.976084 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd65ee0e-d2aa-4d32-9f34-1b5a4fd6ca58-config-volume\") pod \"coredns-76f75df574-hcgrk\" (UID: \"dd65ee0e-d2aa-4d32-9f34-1b5a4fd6ca58\") " pod="kube-system/coredns-76f75df574-hcgrk" Dec 13 14:08:35.150094 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Dec 13 14:08:35.162977 env[1556]: time="2024-12-13T14:08:35.162937244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xd94t,Uid:35a8f5df-2a4a-4737-b740-cf34c14cb63e,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:35.168876 env[1556]: time="2024-12-13T14:08:35.168842553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hcgrk,Uid:dd65ee0e-d2aa-4d32-9f34-1b5a4fd6ca58,Namespace:kube-system,Attempt:0,}" Dec 13 14:08:35.557126 kubelet[2717]: I1213 14:08:35.557087 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-thcrx" podStartSLOduration=1.208605003 podStartE2EDuration="13.557034439s" podCreationTimestamp="2024-12-13 14:08:22 +0000 UTC" firstStartedPulling="2024-12-13 14:08:22.540804293 +0000 UTC m=+15.324780125" lastFinishedPulling="2024-12-13 14:08:34.889233729 +0000 UTC m=+27.673209561" observedRunningTime="2024-12-13 14:08:35.491504787 +0000 UTC m=+28.275480619" watchObservedRunningTime="2024-12-13 14:08:35.557034439 +0000 UTC m=+28.341010271" Dec 13 14:08:38.854443 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:08:38.854622 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:08:38.854161 systemd-networkd[1743]: cilium_host: Link UP Dec 13 14:08:38.856247 systemd-networkd[1743]: cilium_net: Link UP Dec 13 14:08:38.856523 systemd-networkd[1743]: cilium_net: Gained carrier Dec 13 
14:08:38.857392 systemd-networkd[1743]: cilium_host: Gained carrier Dec 13 14:08:38.957433 systemd-networkd[1743]: cilium_vxlan: Link UP Dec 13 14:08:38.957440 systemd-networkd[1743]: cilium_vxlan: Gained carrier Dec 13 14:08:39.170105 kernel: NET: Registered PF_ALG protocol family Dec 13 14:08:39.360223 systemd-networkd[1743]: cilium_net: Gained IPv6LL Dec 13 14:08:39.537206 systemd-networkd[1743]: cilium_host: Gained IPv6LL Dec 13 14:08:39.798158 systemd-networkd[1743]: lxc_health: Link UP Dec 13 14:08:39.808248 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:08:39.808282 systemd-networkd[1743]: lxc_health: Gained carrier Dec 13 14:08:40.251921 systemd-networkd[1743]: lxc27d6a1f6e430: Link UP Dec 13 14:08:40.261127 kernel: eth0: renamed from tmp3f031 Dec 13 14:08:40.285052 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:08:40.285184 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc27d6a1f6e430: link becomes ready Dec 13 14:08:40.281108 systemd-networkd[1743]: lxc27d6a1f6e430: Gained carrier Dec 13 14:08:40.282776 systemd-networkd[1743]: lxc22f53345f0cf: Link UP Dec 13 14:08:40.297105 kernel: eth0: renamed from tmp8d1a2 Dec 13 14:08:40.310107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc22f53345f0cf: link becomes ready Dec 13 14:08:40.310135 systemd-networkd[1743]: lxc22f53345f0cf: Gained carrier Dec 13 14:08:40.368244 systemd-networkd[1743]: cilium_vxlan: Gained IPv6LL Dec 13 14:08:40.394368 kubelet[2717]: I1213 14:08:40.394332 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kmlcf" podStartSLOduration=11.434939323 podStartE2EDuration="18.3942914s" podCreationTimestamp="2024-12-13 14:08:22 +0000 UTC" firstStartedPulling="2024-12-13 14:08:22.446075275 +0000 UTC m=+15.230051107" lastFinishedPulling="2024-12-13 14:08:29.405427272 +0000 UTC m=+22.189403184" observedRunningTime="2024-12-13 14:08:35.558180522 +0000 UTC m=+28.342156354" watchObservedRunningTime="2024-12-13 
14:08:40.3942914 +0000 UTC m=+33.178267232" Dec 13 14:08:41.585289 systemd-networkd[1743]: lxc_health: Gained IPv6LL Dec 13 14:08:41.969261 systemd-networkd[1743]: lxc27d6a1f6e430: Gained IPv6LL Dec 13 14:08:42.353278 systemd-networkd[1743]: lxc22f53345f0cf: Gained IPv6LL Dec 13 14:08:43.783883 env[1556]: time="2024-12-13T14:08:43.783764763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:43.784226 env[1556]: time="2024-12-13T14:08:43.783886264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:43.784226 env[1556]: time="2024-12-13T14:08:43.783938536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:43.784226 env[1556]: time="2024-12-13T14:08:43.784119588Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d1a28defbd63e9b0aac0f3109b97933aa1257a2222345c121c3cbcc094d83b9 pid=3879 runtime=io.containerd.runc.v2 Dec 13 14:08:43.797262 env[1556]: time="2024-12-13T14:08:43.797165010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:08:43.797379 env[1556]: time="2024-12-13T14:08:43.797265354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:08:43.797379 env[1556]: time="2024-12-13T14:08:43.797290910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:08:43.800378 env[1556]: time="2024-12-13T14:08:43.800316082Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f0312fc3c930d96952f862bd2c9e97c10aff89175922b18fa6a2213303157fc pid=3898 runtime=io.containerd.runc.v2 Dec 13 14:08:43.819626 systemd[1]: run-containerd-runc-k8s.io-8d1a28defbd63e9b0aac0f3109b97933aa1257a2222345c121c3cbcc094d83b9-runc.iUL6jz.mount: Deactivated successfully. Dec 13 14:08:43.897722 env[1556]: time="2024-12-13T14:08:43.896489444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hcgrk,Uid:dd65ee0e-d2aa-4d32-9f34-1b5a4fd6ca58,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d1a28defbd63e9b0aac0f3109b97933aa1257a2222345c121c3cbcc094d83b9\"" Dec 13 14:08:43.899898 env[1556]: time="2024-12-13T14:08:43.899762778Z" level=info msg="CreateContainer within sandbox \"8d1a28defbd63e9b0aac0f3109b97933aa1257a2222345c121c3cbcc094d83b9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:08:43.924026 env[1556]: time="2024-12-13T14:08:43.923982951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xd94t,Uid:35a8f5df-2a4a-4737-b740-cf34c14cb63e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f0312fc3c930d96952f862bd2c9e97c10aff89175922b18fa6a2213303157fc\"" Dec 13 14:08:43.928063 env[1556]: time="2024-12-13T14:08:43.928024845Z" level=info msg="CreateContainer within sandbox \"3f0312fc3c930d96952f862bd2c9e97c10aff89175922b18fa6a2213303157fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:08:43.953277 env[1556]: time="2024-12-13T14:08:43.953235785Z" level=info msg="CreateContainer within sandbox \"8d1a28defbd63e9b0aac0f3109b97933aa1257a2222345c121c3cbcc094d83b9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"94c11de51fa9b7087e63a6141f9d198d3cef581e65b8e13e877481d322f5e30c\"" Dec 13 
14:08:43.955178 env[1556]: time="2024-12-13T14:08:43.954128487Z" level=info msg="StartContainer for \"94c11de51fa9b7087e63a6141f9d198d3cef581e65b8e13e877481d322f5e30c\"" Dec 13 14:08:43.970019 env[1556]: time="2024-12-13T14:08:43.969966597Z" level=info msg="CreateContainer within sandbox \"3f0312fc3c930d96952f862bd2c9e97c10aff89175922b18fa6a2213303157fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a2d52de30be78567ee6c5cedd6a228d054824138a71ca845678d82d00c27441\"" Dec 13 14:08:43.970828 env[1556]: time="2024-12-13T14:08:43.970801748Z" level=info msg="StartContainer for \"0a2d52de30be78567ee6c5cedd6a228d054824138a71ca845678d82d00c27441\"" Dec 13 14:08:44.024713 env[1556]: time="2024-12-13T14:08:44.024658458Z" level=info msg="StartContainer for \"94c11de51fa9b7087e63a6141f9d198d3cef581e65b8e13e877481d322f5e30c\" returns successfully" Dec 13 14:08:44.050904 env[1556]: time="2024-12-13T14:08:44.050781664Z" level=info msg="StartContainer for \"0a2d52de30be78567ee6c5cedd6a228d054824138a71ca845678d82d00c27441\" returns successfully" Dec 13 14:08:44.498297 kubelet[2717]: I1213 14:08:44.497854 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hcgrk" podStartSLOduration=22.497814031 podStartE2EDuration="22.497814031s" podCreationTimestamp="2024-12-13 14:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:44.48571968 +0000 UTC m=+37.269695512" watchObservedRunningTime="2024-12-13 14:08:44.497814031 +0000 UTC m=+37.281789863" Dec 13 14:08:44.519089 kubelet[2717]: I1213 14:08:44.519028 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xd94t" podStartSLOduration=22.518985634 podStartE2EDuration="22.518985634s" podCreationTimestamp="2024-12-13 14:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:08:44.498780283 +0000 UTC m=+37.282756115" watchObservedRunningTime="2024-12-13 14:08:44.518985634 +0000 UTC m=+37.302961466" Dec 13 14:08:44.789422 systemd[1]: run-containerd-runc-k8s.io-3f0312fc3c930d96952f862bd2c9e97c10aff89175922b18fa6a2213303157fc-runc.j8HTFX.mount: Deactivated successfully. Dec 13 14:10:34.309553 systemd[1]: Started sshd@5-10.200.20.41:22-10.200.16.10:48686.service. Dec 13 14:10:34.747575 sshd[4049]: Accepted publickey for core from 10.200.16.10 port 48686 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:34.748446 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:34.752840 systemd[1]: Started session-8.scope. Dec 13 14:10:34.753952 systemd-logind[1542]: New session 8 of user core. Dec 13 14:10:35.159513 sshd[4049]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:35.162249 systemd[1]: sshd@5-10.200.20.41:22-10.200.16.10:48686.service: Deactivated successfully. Dec 13 14:10:35.162975 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:10:35.164463 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:10:35.165115 systemd-logind[1542]: Removed session 8. Dec 13 14:10:40.232884 systemd[1]: Started sshd@6-10.200.20.41:22-10.200.16.10:48894.service. Dec 13 14:10:40.679782 sshd[4063]: Accepted publickey for core from 10.200.16.10 port 48894 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:40.681508 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:40.686063 systemd[1]: Started session-9.scope. Dec 13 14:10:40.686272 systemd-logind[1542]: New session 9 of user core. 
Dec 13 14:10:41.077826 sshd[4063]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:41.080854 systemd[1]: sshd@6-10.200.20.41:22-10.200.16.10:48894.service: Deactivated successfully. Dec 13 14:10:41.081670 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:10:41.082764 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:10:41.083536 systemd-logind[1542]: Removed session 9. Dec 13 14:10:46.151817 systemd[1]: Started sshd@7-10.200.20.41:22-10.200.16.10:48896.service. Dec 13 14:10:46.596310 sshd[4078]: Accepted publickey for core from 10.200.16.10 port 48896 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:46.597982 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:46.602283 systemd[1]: Started session-10.scope. Dec 13 14:10:46.602467 systemd-logind[1542]: New session 10 of user core. Dec 13 14:10:46.993329 sshd[4078]: pam_unix(sshd:session): session closed for user core Dec 13 14:10:46.996346 systemd[1]: sshd@7-10.200.20.41:22-10.200.16.10:48896.service: Deactivated successfully. Dec 13 14:10:46.997910 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:10:46.998610 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:10:46.999539 systemd-logind[1542]: Removed session 10. Dec 13 14:10:52.065232 systemd[1]: Started sshd@8-10.200.20.41:22-10.200.16.10:47218.service. Dec 13 14:10:52.503189 sshd[4092]: Accepted publickey for core from 10.200.16.10 port 47218 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:10:52.504782 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:10:52.509120 systemd[1]: Started session-11.scope. Dec 13 14:10:52.509348 systemd-logind[1542]: New session 11 of user core. 
Dec 13 14:10:52.895289 sshd[4092]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:52.897907 systemd[1]: sshd@8-10.200.20.41:22-10.200.16.10:47218.service: Deactivated successfully.
Dec 13 14:10:52.899279 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 14:10:52.899785 systemd-logind[1542]: Session 11 logged out. Waiting for processes to exit.
Dec 13 14:10:52.900620 systemd-logind[1542]: Removed session 11.
Dec 13 14:10:57.966706 systemd[1]: Started sshd@9-10.200.20.41:22-10.200.16.10:47228.service.
Dec 13 14:10:58.399295 sshd[4109]: Accepted publickey for core from 10.200.16.10 port 47228 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI
Dec 13 14:10:58.400909 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:58.405313 systemd[1]: Started session-12.scope.
Dec 13 14:10:58.406392 systemd-logind[1542]: New session 12 of user core.
Dec 13 14:10:58.784870 sshd[4109]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:58.787729 systemd[1]: sshd@9-10.200.20.41:22-10.200.16.10:47228.service: Deactivated successfully.
Dec 13 14:10:58.788130 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit.
Dec 13 14:10:58.788531 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 14:10:58.789515 systemd-logind[1542]: Removed session 12.
Dec 13 14:10:58.855891 systemd[1]: Started sshd@10-10.200.20.41:22-10.200.16.10:53144.service.
Dec 13 14:10:59.290233 sshd[4123]: Accepted publickey for core from 10.200.16.10 port 53144 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI
Dec 13 14:10:59.291865 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:10:59.295898 systemd-logind[1542]: New session 13 of user core.
Dec 13 14:10:59.296213 systemd[1]: Started session-13.scope.
Dec 13 14:10:59.760776 sshd[4123]: pam_unix(sshd:session): session closed for user core
Dec 13 14:10:59.763225 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit.
Dec 13 14:10:59.763438 systemd[1]: sshd@10-10.200.20.41:22-10.200.16.10:53144.service: Deactivated successfully.
Dec 13 14:10:59.764246 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 14:10:59.764664 systemd-logind[1542]: Removed session 13.
Dec 13 14:10:59.829955 systemd[1]: Started sshd@11-10.200.20.41:22-10.200.16.10:53148.service.
Dec 13 14:11:00.255926 sshd[4134]: Accepted publickey for core from 10.200.16.10 port 53148 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI
Dec 13 14:11:00.259222 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:11:00.263749 systemd-logind[1542]: New session 14 of user core.
Dec 13 14:11:00.263980 systemd[1]: Started session-14.scope.
Dec 13 14:11:00.650048 sshd[4134]: pam_unix(sshd:session): session closed for user core
Dec 13 14:11:00.653113 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit.
Dec 13 14:11:00.653343 systemd[1]: sshd@11-10.200.20.41:22-10.200.16.10:53148.service: Deactivated successfully.
Dec 13 14:11:00.654141 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 14:11:00.654572 systemd-logind[1542]: Removed session 14.
Dec 13 14:11:05.722833 systemd[1]: Started sshd@12-10.200.20.41:22-10.200.16.10:53162.service.
Dec 13 14:11:06.168437 sshd[4148]: Accepted publickey for core from 10.200.16.10 port 53162 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI
Dec 13 14:11:06.169714 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:11:06.174427 systemd-logind[1542]: New session 15 of user core.
Dec 13 14:11:06.174850 systemd[1]: Started session-15.scope.
Dec 13 14:11:06.573369 sshd[4148]: pam_unix(sshd:session): session closed for user core
Dec 13 14:11:06.575717 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit.
Dec 13 14:11:06.575929 systemd[1]: sshd@12-10.200.20.41:22-10.200.16.10:53162.service: Deactivated successfully.
Dec 13 14:11:06.576740 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 14:11:06.577190 systemd-logind[1542]: Removed session 15.
Dec 13 14:11:06.641938 systemd[1]: Started sshd@13-10.200.20.41:22-10.200.16.10:53172.service.
Dec 13 14:11:07.069879 sshd[4164]: Accepted publickey for core from 10.200.16.10 port 53172 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI
Dec 13 14:11:07.070664 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:11:07.077138 systemd-logind[1542]: New session 16 of user core.
Dec 13 14:11:07.078178 systemd[1]: Started session-16.scope.
Dec 13 14:11:07.519296 sshd[4164]: pam_unix(sshd:session): session closed for user core
Dec 13 14:11:07.521528 systemd[1]: sshd@13-10.200.20.41:22-10.200.16.10:53172.service: Deactivated successfully.
Dec 13 14:11:07.522509 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 14:11:07.522528 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit.
Dec 13 14:11:07.523553 systemd-logind[1542]: Removed session 16.
Dec 13 14:11:07.587859 systemd[1]: Started sshd@14-10.200.20.41:22-10.200.16.10:53174.service.
Dec 13 14:11:08.013389 sshd[4176]: Accepted publickey for core from 10.200.16.10 port 53174 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI
Dec 13 14:11:08.014997 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:11:08.018993 systemd-logind[1542]: New session 17 of user core.
Dec 13 14:11:08.019279 systemd[1]: Started session-17.scope.
Dec 13 14:11:09.570492 sshd[4176]: pam_unix(sshd:session): session closed for user core
Dec 13 14:11:09.572944 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit.
Dec 13 14:11:09.573104 systemd[1]: sshd@14-10.200.20.41:22-10.200.16.10:53174.service: Deactivated successfully.
Dec 13 14:11:09.573890 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 14:11:09.574333 systemd-logind[1542]: Removed session 17.
Dec 13 14:11:09.639568 systemd[1]: Started sshd@15-10.200.20.41:22-10.200.16.10:47758.service.
Dec 13 14:11:10.073608 sshd[4194]: Accepted publickey for core from 10.200.16.10 port 47758 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI
Dec 13 14:11:10.075204 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:11:10.079791 systemd[1]: Started session-18.scope.
Dec 13 14:11:10.080185 systemd-logind[1542]: New session 18 of user core.
Dec 13 14:11:10.575474 sshd[4194]: pam_unix(sshd:session): session closed for user core
Dec 13 14:11:10.578059 systemd[1]: sshd@15-10.200.20.41:22-10.200.16.10:47758.service: Deactivated successfully.
Dec 13 14:11:10.578814 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:11:10.579218 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:11:10.580396 systemd-logind[1542]: Removed session 18.
Dec 13 14:11:10.647919 systemd[1]: Started sshd@16-10.200.20.41:22-10.200.16.10:47770.service.
Dec 13 14:11:11.092103 sshd[4205]: Accepted publickey for core from 10.200.16.10 port 47770 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI
Dec 13 14:11:11.093761 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:11:11.098015 systemd[1]: Started session-19.scope.
Dec 13 14:11:11.098538 systemd-logind[1542]: New session 19 of user core.
Dec 13 14:11:11.489440 sshd[4205]: pam_unix(sshd:session): session closed for user core
Dec 13 14:11:11.492063 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:11:11.492216 systemd[1]: sshd@16-10.200.20.41:22-10.200.16.10:47770.service: Deactivated successfully.
Dec 13 14:11:11.493017 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:11:11.493471 systemd-logind[1542]: Removed session 19.
Dec 13 14:11:16.558945 systemd[1]: Started sshd@17-10.200.20.41:22-10.200.16.10:47782.service.
Dec 13 14:11:16.984514 sshd[4221]: Accepted publickey for core from 10.200.16.10 port 47782 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI
Dec 13 14:11:16.986176 sshd[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:11:16.990515 systemd[1]: Started session-20.scope.
Dec 13 14:11:16.990715 systemd-logind[1542]: New session 20 of user core.
Dec 13 14:11:17.380278 sshd[4221]: pam_unix(sshd:session): session closed for user core
Dec 13 14:11:17.382801 systemd-logind[1542]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:11:17.383510 systemd[1]: sshd@17-10.200.20.41:22-10.200.16.10:47782.service: Deactivated successfully.
Dec 13 14:11:17.384673 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:11:17.387766 systemd-logind[1542]: Removed session 20.
Dec 13 14:11:22.452810 systemd[1]: Started sshd@18-10.200.20.41:22-10.200.16.10:46094.service.
Dec 13 14:11:22.898801 sshd[4235]: Accepted publickey for core from 10.200.16.10 port 46094 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI
Dec 13 14:11:22.900172 sshd[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:11:22.904913 systemd[1]: Started session-21.scope.
Dec 13 14:11:22.905245 systemd-logind[1542]: New session 21 of user core.
Dec 13 14:11:23.291030 sshd[4235]: pam_unix(sshd:session): session closed for user core
Dec 13 14:11:23.294495 systemd-logind[1542]: Session 21 logged out. Waiting for processes to exit.
Dec 13 14:11:23.294624 systemd[1]: sshd@18-10.200.20.41:22-10.200.16.10:46094.service: Deactivated successfully.
Dec 13 14:11:23.295623 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 14:11:23.296227 systemd-logind[1542]: Removed session 21.
Dec 13 14:11:28.363222 systemd[1]: Started sshd@19-10.200.20.41:22-10.200.16.10:46098.service.
Dec 13 14:11:28.800679 sshd[4250]: Accepted publickey for core from 10.200.16.10 port 46098 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI
Dec 13 14:11:28.801903 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:11:28.806482 systemd[1]: Started session-22.scope.
Dec 13 14:11:28.806676 systemd-logind[1542]: New session 22 of user core.
Dec 13 14:11:29.183929 sshd[4250]: pam_unix(sshd:session): session closed for user core
Dec 13 14:11:29.186858 systemd[1]: sshd@19-10.200.20.41:22-10.200.16.10:46098.service: Deactivated successfully.
Dec 13 14:11:29.187643 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 14:11:29.188038 systemd-logind[1542]: Session 22 logged out. Waiting for processes to exit.
Dec 13 14:11:29.188670 systemd-logind[1542]: Removed session 22.
Dec 13 14:11:29.256322 systemd[1]: Started sshd@20-10.200.20.41:22-10.200.16.10:38856.service.
Dec 13 14:11:29.703473 sshd[4263]: Accepted publickey for core from 10.200.16.10 port 38856 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI
Dec 13 14:11:29.705240 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:11:29.709786 systemd[1]: Started session-23.scope.
Dec 13 14:11:29.710137 systemd-logind[1542]: New session 23 of user core.
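As an annotation on the sshd entries above: each session produces a fixed open/close pair (`pam_unix ... session opened` / `session closed`) keyed by the sshd PID, so session durations can be recovered mechanically. Below is a minimal, hypothetical sketch of that pairing; the `session_durations` helper and the two embedded sample lines are illustrative and not part of the log itself, and the year is assumed because journal short timestamps omit it.

```python
import re
from datetime import datetime

# Matches journal-style sshd lines such as:
#   Dec 13 14:10:34.748446 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
#   Dec 13 14:10:35.159513 sshd[4049]: pam_unix(sshd:session): session closed for user core
LINE_RE = re.compile(
    r"^(?P<mon>\w{3}) (?P<day>\d+) (?P<time>[\d:.]+) sshd\[(?P<pid>\d+)\]: (?P<msg>.*)$"
)

def session_durations(lines, year=2024):
    """Pair 'session opened'/'session closed' events per sshd PID and
    return {pid: duration_in_seconds}. Unmatched events are ignored."""
    opened = {}
    durations = {}
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue
        # Journal short timestamps carry no year; one is assumed here.
        ts = datetime.strptime(
            f"{year} {m['mon']} {m['day']} {m['time']}", "%Y %b %d %H:%M:%S.%f"
        )
        if "session opened" in m["msg"]:
            opened[m["pid"]] = ts
        elif "session closed" in m["msg"] and m["pid"] in opened:
            durations[m["pid"]] = (ts - opened.pop(m["pid"])).total_seconds()
    return durations

# Sample lines copied from the session-8 entries above.
logs = [
    "Dec 13 14:10:34.748446 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)",
    "Dec 13 14:10:35.159513 sshd[4049]: pam_unix(sshd:session): session closed for user core",
]
print(session_durations(logs))
```

Pairing by PID works here because each connection in this excerpt is handled by a distinct sshd process (4049, 4063, 4078, ...), so open and close events never interleave under one key.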
Dec 13 14:11:32.264696 systemd[1]: run-containerd-runc-k8s.io-e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454-runc.HJeZen.mount: Deactivated successfully.
Dec 13 14:11:32.288537 env[1556]: time="2024-12-13T14:11:32.285947633Z" level=info msg="StopContainer for \"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db\" with timeout 30 (s)"
Dec 13 14:11:32.288537 env[1556]: time="2024-12-13T14:11:32.286232759Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:11:32.289413 env[1556]: time="2024-12-13T14:11:32.289183094Z" level=info msg="Stop container \"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db\" with signal terminated"
Dec 13 14:11:32.292344 env[1556]: time="2024-12-13T14:11:32.292316713Z" level=info msg="StopContainer for \"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454\" with timeout 2 (s)"
Dec 13 14:11:32.292920 env[1556]: time="2024-12-13T14:11:32.292881084Z" level=info msg="Stop container \"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454\" with signal terminated"
Dec 13 14:11:32.305189 systemd-networkd[1743]: lxc_health: Link DOWN
Dec 13 14:11:32.305201 systemd-networkd[1743]: lxc_health: Lost carrier
Dec 13 14:11:32.326100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db-rootfs.mount: Deactivated successfully.
Dec 13 14:11:32.347300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454-rootfs.mount: Deactivated successfully.
Dec 13 14:11:32.403977 env[1556]: time="2024-12-13T14:11:32.403931775Z" level=info msg="shim disconnected" id=e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454
Dec 13 14:11:32.404245 env[1556]: time="2024-12-13T14:11:32.404224821Z" level=warning msg="cleaning up after shim disconnected" id=e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454 namespace=k8s.io
Dec 13 14:11:32.404332 env[1556]: time="2024-12-13T14:11:32.404317663Z" level=info msg="cleaning up dead shim"
Dec 13 14:11:32.404480 env[1556]: time="2024-12-13T14:11:32.404108779Z" level=info msg="shim disconnected" id=aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db
Dec 13 14:11:32.404480 env[1556]: time="2024-12-13T14:11:32.404475786Z" level=warning msg="cleaning up after shim disconnected" id=aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db namespace=k8s.io
Dec 13 14:11:32.404548 env[1556]: time="2024-12-13T14:11:32.404484826Z" level=info msg="cleaning up dead shim"
Dec 13 14:11:32.413496 env[1556]: time="2024-12-13T14:11:32.413438795Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4333 runtime=io.containerd.runc.v2\n"
Dec 13 14:11:32.416345 env[1556]: time="2024-12-13T14:11:32.416265728Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4332 runtime=io.containerd.runc.v2\n"
Dec 13 14:11:32.417533 env[1556]: time="2024-12-13T14:11:32.417502991Z" level=info msg="StopContainer for \"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454\" returns successfully"
Dec 13 14:11:32.418266 env[1556]: time="2024-12-13T14:11:32.418239405Z" level=info msg="StopPodSandbox for \"0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27\""
Dec 13 14:11:32.421333 env[1556]: time="2024-12-13T14:11:32.418454409Z" level=info msg="Container to stop \"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:11:32.421333 env[1556]: time="2024-12-13T14:11:32.418475849Z" level=info msg="Container to stop \"b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:11:32.421333 env[1556]: time="2024-12-13T14:11:32.418486690Z" level=info msg="Container to stop \"0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:11:32.421333 env[1556]: time="2024-12-13T14:11:32.418497450Z" level=info msg="Container to stop \"06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:11:32.421333 env[1556]: time="2024-12-13T14:11:32.418510250Z" level=info msg="Container to stop \"53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:11:32.420489 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27-shm.mount: Deactivated successfully.
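As an annotation on the containerd entries above: each `env[1556]` record carries structured `key=value` fields (`time`, `level`, `msg`, `id`, ...), with values quoted when they contain spaces. A minimal, hypothetical sketch of splitting such a record into a dict follows; `parse_fields` and the simplified sample entry are illustrative, not part of the log, and the regex does not attempt to cover every logrus quoting edge case.

```python
import re

# Matches key=value pairs where value is either a double-quoted string
# (possibly containing escaped quotes) or a bare token.
FIELD_RE = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_fields(entry):
    """Parse a containerd-style key=value log entry into a dict,
    stripping surrounding quotes and unescaping embedded quotes."""
    out = {}
    for key, val in FIELD_RE.findall(entry):
        if val.startswith('"') and val.endswith('"'):
            val = val[1:-1].replace('\\"', '"')
        out[key] = val
    return out

# Simplified sample modeled on the StopContainer entries above.
entry = 'time="2024-12-13T14:11:32Z" level=info msg="StopContainer with timeout 2"'
fields = parse_fields(entry)
print(fields["level"], fields["msg"])
```

Filtering on `fields["level"] == "error"` would, for instance, isolate the CNI reload failure among the surrounding info-level shutdown messages.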
Dec 13 14:11:32.422241 env[1556]: time="2024-12-13T14:11:32.421846073Z" level=info msg="StopContainer for \"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db\" returns successfully"
Dec 13 14:11:32.422609 env[1556]: time="2024-12-13T14:11:32.422560686Z" level=info msg="StopPodSandbox for \"bf1e5912c98e419adced2eacc0c810fe4759c847dd9b2255ef7dd479ee788759\""
Dec 13 14:11:32.422749 env[1556]: time="2024-12-13T14:11:32.422728290Z" level=info msg="Container to stop \"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:11:32.441986 kubelet[2717]: E1213 14:11:32.441955 2717 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:11:32.472449 env[1556]: time="2024-12-13T14:11:32.472407065Z" level=info msg="shim disconnected" id=bf1e5912c98e419adced2eacc0c810fe4759c847dd9b2255ef7dd479ee788759
Dec 13 14:11:32.473650 env[1556]: time="2024-12-13T14:11:32.473625488Z" level=warning msg="cleaning up after shim disconnected" id=bf1e5912c98e419adced2eacc0c810fe4759c847dd9b2255ef7dd479ee788759 namespace=k8s.io
Dec 13 14:11:32.473758 env[1556]: time="2024-12-13T14:11:32.473744410Z" level=info msg="cleaning up dead shim"
Dec 13 14:11:32.473836 env[1556]: time="2024-12-13T14:11:32.472709391Z" level=info msg="shim disconnected" id=0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27
Dec 13 14:11:32.473913 env[1556]: time="2024-12-13T14:11:32.473897933Z" level=warning msg="cleaning up after shim disconnected" id=0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27 namespace=k8s.io
Dec 13 14:11:32.474182 env[1556]: time="2024-12-13T14:11:32.474164618Z" level=info msg="cleaning up dead shim"
Dec 13 14:11:32.481262 env[1556]: time="2024-12-13T14:11:32.481227631Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4404 runtime=io.containerd.runc.v2\n"
Dec 13 14:11:32.481661 env[1556]: time="2024-12-13T14:11:32.481637799Z" level=info msg="TearDown network for sandbox \"0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27\" successfully"
Dec 13 14:11:32.481749 env[1556]: time="2024-12-13T14:11:32.481732441Z" level=info msg="StopPodSandbox for \"0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27\" returns successfully"
Dec 13 14:11:32.495410 env[1556]: time="2024-12-13T14:11:32.495357098Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4403 runtime=io.containerd.runc.v2\n"
Dec 13 14:11:32.495855 env[1556]: time="2024-12-13T14:11:32.495829546Z" level=info msg="TearDown network for sandbox \"bf1e5912c98e419adced2eacc0c810fe4759c847dd9b2255ef7dd479ee788759\" successfully"
Dec 13 14:11:32.495956 env[1556]: time="2024-12-13T14:11:32.495938828Z" level=info msg="StopPodSandbox for \"bf1e5912c98e419adced2eacc0c810fe4759c847dd9b2255ef7dd479ee788759\" returns successfully"
Dec 13 14:11:32.628687 kubelet[2717]: I1213 14:11:32.628631 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-lib-modules\") pod \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") "
Dec 13 14:11:32.628687 kubelet[2717]: I1213 14:11:32.628690 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cni-path\") pod \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") "
Dec 13 14:11:32.628895 kubelet[2717]: I1213 14:11:32.628709 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cilium-cgroup\") pod \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") "
Dec 13 14:11:32.628895 kubelet[2717]: I1213 14:11:32.628745 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79r5f\" (UniqueName: \"kubernetes.io/projected/07dc6c93-9c4d-4401-befa-fb116e2f15c6-kube-api-access-79r5f\") pod \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") "
Dec 13 14:11:32.628895 kubelet[2717]: I1213 14:11:32.628768 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/07dc6c93-9c4d-4401-befa-fb116e2f15c6-clustermesh-secrets\") pod \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") "
Dec 13 14:11:32.628895 kubelet[2717]: I1213 14:11:32.628789 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e260ecb-657a-4bb9-bbd6-58568492ad11-cilium-config-path\") pod \"8e260ecb-657a-4bb9-bbd6-58568492ad11\" (UID: \"8e260ecb-657a-4bb9-bbd6-58568492ad11\") "
Dec 13 14:11:32.628895 kubelet[2717]: I1213 14:11:32.628817 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-xtables-lock\") pod \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") "
Dec 13 14:11:32.628895 kubelet[2717]: I1213 14:11:32.628836 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cilium-run\") pod \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") "
Dec 13 14:11:32.629092 kubelet[2717]: I1213 14:11:32.628855 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-bpf-maps\") pod \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") "
Dec 13 14:11:32.629092 kubelet[2717]: I1213 14:11:32.628873 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-hostproc\") pod \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") "
Dec 13 14:11:32.629092 kubelet[2717]: I1213 14:11:32.628901 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-host-proc-sys-kernel\") pod \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") "
Dec 13 14:11:32.629092 kubelet[2717]: I1213 14:11:32.628921 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/07dc6c93-9c4d-4401-befa-fb116e2f15c6-hubble-tls\") pod \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") "
Dec 13 14:11:32.629092 kubelet[2717]: I1213 14:11:32.628943 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cilium-config-path\") pod \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") "
Dec 13 14:11:32.629092 kubelet[2717]: I1213 14:11:32.628970 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-host-proc-sys-net\") pod \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") "
Dec 13 14:11:32.629235 kubelet[2717]: I1213 14:11:32.629027 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-etc-cni-netd\") pod \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\" (UID: \"07dc6c93-9c4d-4401-befa-fb116e2f15c6\") "
Dec 13 14:11:32.629235 kubelet[2717]: I1213 14:11:32.629057 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvrwl\" (UniqueName: \"kubernetes.io/projected/8e260ecb-657a-4bb9-bbd6-58568492ad11-kube-api-access-tvrwl\") pod \"8e260ecb-657a-4bb9-bbd6-58568492ad11\" (UID: \"8e260ecb-657a-4bb9-bbd6-58568492ad11\") "
Dec 13 14:11:32.629344 kubelet[2717]: I1213 14:11:32.629322 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "07dc6c93-9c4d-4401-befa-fb116e2f15c6" (UID: "07dc6c93-9c4d-4401-befa-fb116e2f15c6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:11:32.629436 kubelet[2717]: I1213 14:11:32.629423 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "07dc6c93-9c4d-4401-befa-fb116e2f15c6" (UID: "07dc6c93-9c4d-4401-befa-fb116e2f15c6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:11:32.629507 kubelet[2717]: I1213 14:11:32.629496 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cni-path" (OuterVolumeSpecName: "cni-path") pod "07dc6c93-9c4d-4401-befa-fb116e2f15c6" (UID: "07dc6c93-9c4d-4401-befa-fb116e2f15c6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:11:32.629576 kubelet[2717]: I1213 14:11:32.629564 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "07dc6c93-9c4d-4401-befa-fb116e2f15c6" (UID: "07dc6c93-9c4d-4401-befa-fb116e2f15c6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:11:32.629775 kubelet[2717]: I1213 14:11:32.629739 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "07dc6c93-9c4d-4401-befa-fb116e2f15c6" (UID: "07dc6c93-9c4d-4401-befa-fb116e2f15c6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:11:32.629829 kubelet[2717]: I1213 14:11:32.629778 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-hostproc" (OuterVolumeSpecName: "hostproc") pod "07dc6c93-9c4d-4401-befa-fb116e2f15c6" (UID: "07dc6c93-9c4d-4401-befa-fb116e2f15c6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:11:32.629829 kubelet[2717]: I1213 14:11:32.629808 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "07dc6c93-9c4d-4401-befa-fb116e2f15c6" (UID: "07dc6c93-9c4d-4401-befa-fb116e2f15c6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:11:32.634020 kubelet[2717]: I1213 14:11:32.633950 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e260ecb-657a-4bb9-bbd6-58568492ad11-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8e260ecb-657a-4bb9-bbd6-58568492ad11" (UID: "8e260ecb-657a-4bb9-bbd6-58568492ad11"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:11:32.634020 kubelet[2717]: I1213 14:11:32.634018 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "07dc6c93-9c4d-4401-befa-fb116e2f15c6" (UID: "07dc6c93-9c4d-4401-befa-fb116e2f15c6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:11:32.634155 kubelet[2717]: I1213 14:11:32.634085 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "07dc6c93-9c4d-4401-befa-fb116e2f15c6" (UID: "07dc6c93-9c4d-4401-befa-fb116e2f15c6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:11:32.634446 kubelet[2717]: I1213 14:11:32.634427 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "07dc6c93-9c4d-4401-befa-fb116e2f15c6" (UID: "07dc6c93-9c4d-4401-befa-fb116e2f15c6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:11:32.634597 kubelet[2717]: I1213 14:11:32.634579 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07dc6c93-9c4d-4401-befa-fb116e2f15c6-kube-api-access-79r5f" (OuterVolumeSpecName: "kube-api-access-79r5f") pod "07dc6c93-9c4d-4401-befa-fb116e2f15c6" (UID: "07dc6c93-9c4d-4401-befa-fb116e2f15c6"). InnerVolumeSpecName "kube-api-access-79r5f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:11:32.636153 kubelet[2717]: I1213 14:11:32.636117 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "07dc6c93-9c4d-4401-befa-fb116e2f15c6" (UID: "07dc6c93-9c4d-4401-befa-fb116e2f15c6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:11:32.636659 kubelet[2717]: I1213 14:11:32.636632 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07dc6c93-9c4d-4401-befa-fb116e2f15c6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "07dc6c93-9c4d-4401-befa-fb116e2f15c6" (UID: "07dc6c93-9c4d-4401-befa-fb116e2f15c6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:11:32.636978 kubelet[2717]: I1213 14:11:32.636945 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e260ecb-657a-4bb9-bbd6-58568492ad11-kube-api-access-tvrwl" (OuterVolumeSpecName: "kube-api-access-tvrwl") pod "8e260ecb-657a-4bb9-bbd6-58568492ad11" (UID: "8e260ecb-657a-4bb9-bbd6-58568492ad11"). InnerVolumeSpecName "kube-api-access-tvrwl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:11:32.637243 kubelet[2717]: I1213 14:11:32.637220 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07dc6c93-9c4d-4401-befa-fb116e2f15c6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "07dc6c93-9c4d-4401-befa-fb116e2f15c6" (UID: "07dc6c93-9c4d-4401-befa-fb116e2f15c6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:11:32.729484 kubelet[2717]: I1213 14:11:32.729448 2717 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-hostproc\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729484 kubelet[2717]: I1213 14:11:32.729486 2717 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729666 kubelet[2717]: I1213 14:11:32.729498 2717 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/07dc6c93-9c4d-4401-befa-fb116e2f15c6-hubble-tls\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729666 kubelet[2717]: I1213 14:11:32.729509 2717 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cilium-config-path\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729666 kubelet[2717]: I1213 14:11:32.729522 2717 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-host-proc-sys-net\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729666 kubelet[2717]: I1213 14:11:32.729533 2717 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-etc-cni-netd\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729666 kubelet[2717]: I1213 14:11:32.729545 2717 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tvrwl\" (UniqueName: \"kubernetes.io/projected/8e260ecb-657a-4bb9-bbd6-58568492ad11-kube-api-access-tvrwl\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729666 kubelet[2717]: I1213 14:11:32.729555 2717 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-lib-modules\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729666 kubelet[2717]: I1213 14:11:32.729564 2717 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cni-path\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729666 kubelet[2717]: I1213 14:11:32.729573 2717 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cilium-cgroup\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729848 kubelet[2717]: I1213 14:11:32.729582 2717 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-xtables-lock\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729848 kubelet[2717]: I1213 14:11:32.729591 2717 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-cilium-run\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729848 kubelet[2717]: I1213 14:11:32.729602 2717 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-79r5f\" (UniqueName: \"kubernetes.io/projected/07dc6c93-9c4d-4401-befa-fb116e2f15c6-kube-api-access-79r5f\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729848 kubelet[2717]: I1213 14:11:32.729612 2717 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/07dc6c93-9c4d-4401-befa-fb116e2f15c6-clustermesh-secrets\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729848 kubelet[2717]: I1213 14:11:32.729622 2717 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e260ecb-657a-4bb9-bbd6-58568492ad11-cilium-config-path\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.729848 kubelet[2717]: I1213 14:11:32.729631 2717 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/07dc6c93-9c4d-4401-befa-fb116e2f15c6-bpf-maps\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\""
Dec 13 14:11:32.765040 kubelet[2717]: I1213 14:11:32.765018 2717 scope.go:117] "RemoveContainer" containerID="aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db"
Dec 13 14:11:32.768336 env[1556]: time="2024-12-13T14:11:32.767833350Z" level=info msg="RemoveContainer for \"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db\""
Dec 13 14:11:32.781808 env[1556]: time="2024-12-13T14:11:32.781688491Z" level=info msg="RemoveContainer for \"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db\" returns successfully"
Dec 13 14:11:32.781963 kubelet[2717]: I1213 14:11:32.781937 2717 scope.go:117] "RemoveContainer" containerID="aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db"
Dec 13 14:11:32.784306 env[1556]: time="2024-12-13T14:11:32.782212061Z" level=error msg="ContainerStatus for
\"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db\": not found" Dec 13 14:11:32.784406 kubelet[2717]: E1213 14:11:32.782584 2717 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db\": not found" containerID="aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db" Dec 13 14:11:32.784406 kubelet[2717]: I1213 14:11:32.782699 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db"} err="failed to get container status \"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db\": rpc error: code = NotFound desc = an error occurred when try to find container \"aaa0bef837d67488332171fc045296ea5b6a5c1d5f3b2c779399f954bc0f61db\": not found" Dec 13 14:11:32.784406 kubelet[2717]: I1213 14:11:32.782721 2717 scope.go:117] "RemoveContainer" containerID="e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454" Dec 13 14:11:32.784693 env[1556]: time="2024-12-13T14:11:32.784664307Z" level=info msg="RemoveContainer for \"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454\"" Dec 13 14:11:32.792157 env[1556]: time="2024-12-13T14:11:32.792116927Z" level=info msg="RemoveContainer for \"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454\" returns successfully" Dec 13 14:11:32.792491 kubelet[2717]: I1213 14:11:32.792459 2717 scope.go:117] "RemoveContainer" containerID="53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735" Dec 13 14:11:32.793786 env[1556]: time="2024-12-13T14:11:32.793565634Z" level=info msg="RemoveContainer for 
\"53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735\"" Dec 13 14:11:32.802181 env[1556]: time="2024-12-13T14:11:32.802093795Z" level=info msg="RemoveContainer for \"53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735\" returns successfully" Dec 13 14:11:32.802427 kubelet[2717]: I1213 14:11:32.802404 2717 scope.go:117] "RemoveContainer" containerID="06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6" Dec 13 14:11:32.803515 env[1556]: time="2024-12-13T14:11:32.803465541Z" level=info msg="RemoveContainer for \"06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6\"" Dec 13 14:11:32.810598 env[1556]: time="2024-12-13T14:11:32.810563635Z" level=info msg="RemoveContainer for \"06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6\" returns successfully" Dec 13 14:11:32.810827 kubelet[2717]: I1213 14:11:32.810811 2717 scope.go:117] "RemoveContainer" containerID="0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f" Dec 13 14:11:32.811776 env[1556]: time="2024-12-13T14:11:32.811750977Z" level=info msg="RemoveContainer for \"0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f\"" Dec 13 14:11:32.821685 env[1556]: time="2024-12-13T14:11:32.821652083Z" level=info msg="RemoveContainer for \"0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f\" returns successfully" Dec 13 14:11:32.822038 kubelet[2717]: I1213 14:11:32.822020 2717 scope.go:117] "RemoveContainer" containerID="b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3" Dec 13 14:11:32.823332 env[1556]: time="2024-12-13T14:11:32.823307515Z" level=info msg="RemoveContainer for \"b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3\"" Dec 13 14:11:32.829734 env[1556]: time="2024-12-13T14:11:32.829706155Z" level=info msg="RemoveContainer for \"b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3\" returns successfully" Dec 13 14:11:32.830053 kubelet[2717]: I1213 14:11:32.830035 2717 
scope.go:117] "RemoveContainer" containerID="e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454" Dec 13 14:11:32.830452 env[1556]: time="2024-12-13T14:11:32.830401768Z" level=error msg="ContainerStatus for \"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454\": not found" Dec 13 14:11:32.830679 kubelet[2717]: E1213 14:11:32.830664 2717 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454\": not found" containerID="e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454" Dec 13 14:11:32.830790 kubelet[2717]: I1213 14:11:32.830777 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454"} err="failed to get container status \"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454\": rpc error: code = NotFound desc = an error occurred when try to find container \"e04459db4381b445bad3feec26327727b3d49ee3c1c549c2109222ea3a7b1454\": not found" Dec 13 14:11:32.830852 kubelet[2717]: I1213 14:11:32.830843 2717 scope.go:117] "RemoveContainer" containerID="53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735" Dec 13 14:11:32.831119 env[1556]: time="2024-12-13T14:11:32.831049180Z" level=error msg="ContainerStatus for \"53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735\": not found" Dec 13 14:11:32.831279 kubelet[2717]: E1213 14:11:32.831265 2717 remote_runtime.go:432] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735\": not found" containerID="53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735" Dec 13 14:11:32.831373 kubelet[2717]: I1213 14:11:32.831362 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735"} err="failed to get container status \"53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735\": rpc error: code = NotFound desc = an error occurred when try to find container \"53be5744bfea4b5d48ad8c371302fa81d12237e990b93fdc3ee0935dd7cee735\": not found" Dec 13 14:11:32.831440 kubelet[2717]: I1213 14:11:32.831431 2717 scope.go:117] "RemoveContainer" containerID="06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6" Dec 13 14:11:32.831720 env[1556]: time="2024-12-13T14:11:32.831683152Z" level=error msg="ContainerStatus for \"06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6\": not found" Dec 13 14:11:32.831905 kubelet[2717]: E1213 14:11:32.831893 2717 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6\": not found" containerID="06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6" Dec 13 14:11:32.832010 kubelet[2717]: I1213 14:11:32.831999 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6"} err="failed to get container status \"06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6\": 
rpc error: code = NotFound desc = an error occurred when try to find container \"06f3df048a40f4c97c2b7e6a3b15e4a32adda03c219acfbc0fad7ea4832975b6\": not found" Dec 13 14:11:32.832103 kubelet[2717]: I1213 14:11:32.832063 2717 scope.go:117] "RemoveContainer" containerID="0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f" Dec 13 14:11:32.832402 env[1556]: time="2024-12-13T14:11:32.832357965Z" level=error msg="ContainerStatus for \"0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f\": not found" Dec 13 14:11:32.832659 kubelet[2717]: E1213 14:11:32.832627 2717 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f\": not found" containerID="0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f" Dec 13 14:11:32.832773 kubelet[2717]: I1213 14:11:32.832670 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f"} err="failed to get container status \"0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0169bf0781ac34b1e6261537efbbac2e97f3d01c9c25bb5a633a0ec14f242b9f\": not found" Dec 13 14:11:32.832773 kubelet[2717]: I1213 14:11:32.832697 2717 scope.go:117] "RemoveContainer" containerID="b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3" Dec 13 14:11:32.833002 env[1556]: time="2024-12-13T14:11:32.832951136Z" level=error msg="ContainerStatus for \"b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to 
find container \"b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3\": not found" Dec 13 14:11:32.833218 kubelet[2717]: E1213 14:11:32.833193 2717 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3\": not found" containerID="b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3" Dec 13 14:11:32.833274 kubelet[2717]: I1213 14:11:32.833219 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3"} err="failed to get container status \"b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3\": rpc error: code = NotFound desc = an error occurred when try to find container \"b57decaf99e35f7f70cf58521d5d813a4db85ecd76c08d2089079b1c68e08ef3\": not found" Dec 13 14:11:33.261651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf1e5912c98e419adced2eacc0c810fe4759c847dd9b2255ef7dd479ee788759-rootfs.mount: Deactivated successfully. Dec 13 14:11:33.261790 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf1e5912c98e419adced2eacc0c810fe4759c847dd9b2255ef7dd479ee788759-shm.mount: Deactivated successfully. Dec 13 14:11:33.261881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0915bd42418be238d8049f315d26b683dc4a89a1883cf1b1e8217a9167498a27-rootfs.mount: Deactivated successfully. Dec 13 14:11:33.261961 systemd[1]: var-lib-kubelet-pods-8e260ecb\x2d657a\x2d4bb9\x2dbbd6\x2d58568492ad11-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtvrwl.mount: Deactivated successfully. Dec 13 14:11:33.262055 systemd[1]: var-lib-kubelet-pods-07dc6c93\x2d9c4d\x2d4401\x2dbefa\x2dfb116e2f15c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d79r5f.mount: Deactivated successfully. 
Dec 13 14:11:33.262150 systemd[1]: var-lib-kubelet-pods-07dc6c93\x2d9c4d\x2d4401\x2dbefa\x2dfb116e2f15c6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:11:33.262227 systemd[1]: var-lib-kubelet-pods-07dc6c93\x2d9c4d\x2d4401\x2dbefa\x2dfb116e2f15c6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:11:33.325199 kubelet[2717]: I1213 14:11:33.325165 2717 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="07dc6c93-9c4d-4401-befa-fb116e2f15c6" path="/var/lib/kubelet/pods/07dc6c93-9c4d-4401-befa-fb116e2f15c6/volumes" Dec 13 14:11:33.325727 kubelet[2717]: I1213 14:11:33.325707 2717 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8e260ecb-657a-4bb9-bbd6-58568492ad11" path="/var/lib/kubelet/pods/8e260ecb-657a-4bb9-bbd6-58568492ad11/volumes" Dec 13 14:11:34.300295 sshd[4263]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:34.302806 systemd[1]: sshd@20-10.200.20.41:22-10.200.16.10:38856.service: Deactivated successfully. Dec 13 14:11:34.303916 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 14:11:34.303945 systemd-logind[1542]: Session 23 logged out. Waiting for processes to exit. Dec 13 14:11:34.305269 systemd-logind[1542]: Removed session 23. Dec 13 14:11:34.372483 systemd[1]: Started sshd@21-10.200.20.41:22-10.200.16.10:38870.service. Dec 13 14:11:34.817397 sshd[4438]: Accepted publickey for core from 10.200.16.10 port 38870 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:34.818642 sshd[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:34.823059 systemd[1]: Started session-24.scope. Dec 13 14:11:34.823412 systemd-logind[1542]: New session 24 of user core. 
Dec 13 14:11:35.793289 kubelet[2717]: I1213 14:11:35.793251 2717 topology_manager.go:215] "Topology Admit Handler" podUID="8d35c42b-9dbf-44ab-a978-c5da2236455b" podNamespace="kube-system" podName="cilium-pbxbf" Dec 13 14:11:35.793695 kubelet[2717]: E1213 14:11:35.793317 2717 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07dc6c93-9c4d-4401-befa-fb116e2f15c6" containerName="mount-cgroup" Dec 13 14:11:35.793695 kubelet[2717]: E1213 14:11:35.793329 2717 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07dc6c93-9c4d-4401-befa-fb116e2f15c6" containerName="apply-sysctl-overwrites" Dec 13 14:11:35.793695 kubelet[2717]: E1213 14:11:35.793336 2717 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07dc6c93-9c4d-4401-befa-fb116e2f15c6" containerName="mount-bpf-fs" Dec 13 14:11:35.793695 kubelet[2717]: E1213 14:11:35.793342 2717 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8e260ecb-657a-4bb9-bbd6-58568492ad11" containerName="cilium-operator" Dec 13 14:11:35.793695 kubelet[2717]: E1213 14:11:35.793349 2717 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07dc6c93-9c4d-4401-befa-fb116e2f15c6" containerName="clean-cilium-state" Dec 13 14:11:35.793695 kubelet[2717]: E1213 14:11:35.793356 2717 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07dc6c93-9c4d-4401-befa-fb116e2f15c6" containerName="cilium-agent" Dec 13 14:11:35.793695 kubelet[2717]: I1213 14:11:35.793385 2717 memory_manager.go:354] "RemoveStaleState removing state" podUID="07dc6c93-9c4d-4401-befa-fb116e2f15c6" containerName="cilium-agent" Dec 13 14:11:35.793695 kubelet[2717]: I1213 14:11:35.793393 2717 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e260ecb-657a-4bb9-bbd6-58568492ad11" containerName="cilium-operator" Dec 13 14:11:35.844129 kubelet[2717]: I1213 14:11:35.844093 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-run\") pod \"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 14:11:35.844129 kubelet[2717]: I1213 14:11:35.844134 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-host-proc-sys-kernel\") pod \"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 14:11:35.844301 kubelet[2717]: I1213 14:11:35.844166 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-hostproc\") pod \"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 14:11:35.844301 kubelet[2717]: I1213 14:11:35.844193 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d35c42b-9dbf-44ab-a978-c5da2236455b-clustermesh-secrets\") pod \"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 14:11:35.844301 kubelet[2717]: I1213 14:11:35.844231 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-config-path\") pod \"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 14:11:35.844301 kubelet[2717]: I1213 14:11:35.844252 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-host-proc-sys-net\") pod 
\"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 14:11:35.844301 kubelet[2717]: I1213 14:11:35.844273 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnbg2\" (UniqueName: \"kubernetes.io/projected/8d35c42b-9dbf-44ab-a978-c5da2236455b-kube-api-access-nnbg2\") pod \"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 14:11:35.844431 kubelet[2717]: I1213 14:11:35.844292 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-etc-cni-netd\") pod \"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 14:11:35.844431 kubelet[2717]: I1213 14:11:35.844320 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-lib-modules\") pod \"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 14:11:35.844431 kubelet[2717]: I1213 14:11:35.844342 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-ipsec-secrets\") pod \"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 14:11:35.844431 kubelet[2717]: I1213 14:11:35.844361 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-xtables-lock\") pod \"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 
14:11:35.844431 kubelet[2717]: I1213 14:11:35.844390 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d35c42b-9dbf-44ab-a978-c5da2236455b-hubble-tls\") pod \"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 14:11:35.844431 kubelet[2717]: I1213 14:11:35.844412 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-bpf-maps\") pod \"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 14:11:35.844577 kubelet[2717]: I1213 14:11:35.844433 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-cgroup\") pod \"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 14:11:35.844577 kubelet[2717]: I1213 14:11:35.844454 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-cni-path\") pod \"cilium-pbxbf\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " pod="kube-system/cilium-pbxbf" Dec 13 14:11:35.871251 sshd[4438]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:35.874173 systemd[1]: sshd@21-10.200.20.41:22-10.200.16.10:38870.service: Deactivated successfully. Dec 13 14:11:35.874924 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:11:35.875520 systemd-logind[1542]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:11:35.876365 systemd-logind[1542]: Removed session 24. Dec 13 14:11:35.943157 systemd[1]: Started sshd@22-10.200.20.41:22-10.200.16.10:38880.service. 
Dec 13 14:11:36.097682 env[1556]: time="2024-12-13T14:11:36.097574175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pbxbf,Uid:8d35c42b-9dbf-44ab-a978-c5da2236455b,Namespace:kube-system,Attempt:0,}" Dec 13 14:11:36.134556 env[1556]: time="2024-12-13T14:11:36.134369128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:11:36.134556 env[1556]: time="2024-12-13T14:11:36.134407568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:11:36.134556 env[1556]: time="2024-12-13T14:11:36.134417329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:11:36.134962 env[1556]: time="2024-12-13T14:11:36.134863015Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab0dc9858b85112408f1a18bdaf2d1e07be2dcaee1c0b2f2ce34acaf149fbd73 pid=4462 runtime=io.containerd.runc.v2 Dec 13 14:11:36.165932 env[1556]: time="2024-12-13T14:11:36.165885641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pbxbf,Uid:8d35c42b-9dbf-44ab-a978-c5da2236455b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab0dc9858b85112408f1a18bdaf2d1e07be2dcaee1c0b2f2ce34acaf149fbd73\"" Dec 13 14:11:36.169124 env[1556]: time="2024-12-13T14:11:36.169095089Z" level=info msg="CreateContainer within sandbox \"ab0dc9858b85112408f1a18bdaf2d1e07be2dcaee1c0b2f2ce34acaf149fbd73\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:11:36.201719 env[1556]: time="2024-12-13T14:11:36.201678499Z" level=info msg="CreateContainer within sandbox \"ab0dc9858b85112408f1a18bdaf2d1e07be2dcaee1c0b2f2ce34acaf149fbd73\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"29391162a3fd808561218bccab01e6e99342cab21cdf007adb01328cf00698c5\"" Dec 13 14:11:36.203603 env[1556]: time="2024-12-13T14:11:36.203336323Z" level=info msg="StartContainer for \"29391162a3fd808561218bccab01e6e99342cab21cdf007adb01328cf00698c5\"" Dec 13 14:11:36.253165 env[1556]: time="2024-12-13T14:11:36.250557913Z" level=info msg="StartContainer for \"29391162a3fd808561218bccab01e6e99342cab21cdf007adb01328cf00698c5\" returns successfully" Dec 13 14:11:36.349815 env[1556]: time="2024-12-13T14:11:36.349474838Z" level=info msg="shim disconnected" id=29391162a3fd808561218bccab01e6e99342cab21cdf007adb01328cf00698c5 Dec 13 14:11:36.349815 env[1556]: time="2024-12-13T14:11:36.349520559Z" level=warning msg="cleaning up after shim disconnected" id=29391162a3fd808561218bccab01e6e99342cab21cdf007adb01328cf00698c5 namespace=k8s.io Dec 13 14:11:36.349815 env[1556]: time="2024-12-13T14:11:36.349529319Z" level=info msg="cleaning up dead shim" Dec 13 14:11:36.356613 env[1556]: time="2024-12-13T14:11:36.356570984Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4545 runtime=io.containerd.runc.v2\n" Dec 13 14:11:36.402482 sshd[4449]: Accepted publickey for core from 10.200.16.10 port 38880 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:36.403892 sshd[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:36.407538 systemd-logind[1542]: New session 25 of user core. Dec 13 14:11:36.408048 systemd[1]: Started session-25.scope. 
Dec 13 14:11:36.781158 env[1556]: time="2024-12-13T14:11:36.781119039Z" level=info msg="StopPodSandbox for \"ab0dc9858b85112408f1a18bdaf2d1e07be2dcaee1c0b2f2ce34acaf149fbd73\"" Dec 13 14:11:36.781311 env[1556]: time="2024-12-13T14:11:36.781178640Z" level=info msg="Container to stop \"29391162a3fd808561218bccab01e6e99342cab21cdf007adb01328cf00698c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:11:36.828623 sshd[4449]: pam_unix(sshd:session): session closed for user core Dec 13 14:11:36.831058 systemd[1]: sshd@22-10.200.20.41:22-10.200.16.10:38880.service: Deactivated successfully. Dec 13 14:11:36.832256 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:11:36.832270 systemd-logind[1542]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:11:36.833542 env[1556]: time="2024-12-13T14:11:36.833495626Z" level=info msg="shim disconnected" id=ab0dc9858b85112408f1a18bdaf2d1e07be2dcaee1c0b2f2ce34acaf149fbd73 Dec 13 14:11:36.833647 env[1556]: time="2024-12-13T14:11:36.833543547Z" level=warning msg="cleaning up after shim disconnected" id=ab0dc9858b85112408f1a18bdaf2d1e07be2dcaee1c0b2f2ce34acaf149fbd73 namespace=k8s.io Dec 13 14:11:36.833647 env[1556]: time="2024-12-13T14:11:36.833553667Z" level=info msg="cleaning up dead shim" Dec 13 14:11:36.833828 systemd-logind[1542]: Removed session 25. 
Dec 13 14:11:36.844183 env[1556]: time="2024-12-13T14:11:36.844127105Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4587 runtime=io.containerd.runc.v2\n" Dec 13 14:11:36.844486 env[1556]: time="2024-12-13T14:11:36.844458870Z" level=info msg="TearDown network for sandbox \"ab0dc9858b85112408f1a18bdaf2d1e07be2dcaee1c0b2f2ce34acaf149fbd73\" successfully" Dec 13 14:11:36.844536 env[1556]: time="2024-12-13T14:11:36.844484911Z" level=info msg="StopPodSandbox for \"ab0dc9858b85112408f1a18bdaf2d1e07be2dcaee1c0b2f2ce34acaf149fbd73\" returns successfully" Dec 13 14:11:36.854100 kubelet[2717]: I1213 14:11:36.851557 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-lib-modules\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854100 kubelet[2717]: I1213 14:11:36.851595 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-run\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854100 kubelet[2717]: I1213 14:11:36.851691 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnbg2\" (UniqueName: \"kubernetes.io/projected/8d35c42b-9dbf-44ab-a978-c5da2236455b-kube-api-access-nnbg2\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854100 kubelet[2717]: I1213 14:11:36.851716 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-hostproc\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: 
\"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854100 kubelet[2717]: I1213 14:11:36.851735 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-cni-path\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854100 kubelet[2717]: I1213 14:11:36.851769 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-host-proc-sys-kernel\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854630 kubelet[2717]: I1213 14:11:36.851789 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-etc-cni-netd\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854630 kubelet[2717]: I1213 14:11:36.851808 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d35c42b-9dbf-44ab-a978-c5da2236455b-hubble-tls\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854630 kubelet[2717]: I1213 14:11:36.851825 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-xtables-lock\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854630 kubelet[2717]: I1213 14:11:36.851857 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-config-path\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854630 kubelet[2717]: I1213 14:11:36.851880 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d35c42b-9dbf-44ab-a978-c5da2236455b-clustermesh-secrets\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854630 kubelet[2717]: I1213 14:11:36.851897 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-host-proc-sys-net\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854790 kubelet[2717]: I1213 14:11:36.851926 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-ipsec-secrets\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854790 kubelet[2717]: I1213 14:11:36.851946 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-bpf-maps\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854790 kubelet[2717]: I1213 14:11:36.851964 2717 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-cgroup\") pod \"8d35c42b-9dbf-44ab-a978-c5da2236455b\" (UID: \"8d35c42b-9dbf-44ab-a978-c5da2236455b\") " Dec 13 14:11:36.854790 kubelet[2717]: I1213 
14:11:36.852034 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:36.854790 kubelet[2717]: I1213 14:11:36.852060 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:36.854954 kubelet[2717]: I1213 14:11:36.852099 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:36.855427 kubelet[2717]: I1213 14:11:36.855026 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:36.855427 kubelet[2717]: I1213 14:11:36.855099 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-hostproc" (OuterVolumeSpecName: "hostproc") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:36.855427 kubelet[2717]: I1213 14:11:36.855117 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-cni-path" (OuterVolumeSpecName: "cni-path") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:36.855427 kubelet[2717]: I1213 14:11:36.855134 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:36.855427 kubelet[2717]: I1213 14:11:36.855161 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:36.859346 kubelet[2717]: I1213 14:11:36.859320 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d35c42b-9dbf-44ab-a978-c5da2236455b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:11:36.859472 kubelet[2717]: I1213 14:11:36.859457 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:36.860763 kubelet[2717]: I1213 14:11:36.860739 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d35c42b-9dbf-44ab-a978-c5da2236455b-kube-api-access-nnbg2" (OuterVolumeSpecName: "kube-api-access-nnbg2") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "kube-api-access-nnbg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:11:36.864981 kubelet[2717]: I1213 14:11:36.864952 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:11:36.865241 kubelet[2717]: I1213 14:11:36.865209 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:11:36.865305 kubelet[2717]: I1213 14:11:36.865258 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:11:36.868427 kubelet[2717]: I1213 14:11:36.868402 2717 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d35c42b-9dbf-44ab-a978-c5da2236455b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8d35c42b-9dbf-44ab-a978-c5da2236455b" (UID: "8d35c42b-9dbf-44ab-a978-c5da2236455b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:11:36.896962 systemd[1]: Started sshd@23-10.200.20.41:22-10.200.16.10:38882.service. Dec 13 14:11:36.949891 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab0dc9858b85112408f1a18bdaf2d1e07be2dcaee1c0b2f2ce34acaf149fbd73-shm.mount: Deactivated successfully. Dec 13 14:11:36.950042 systemd[1]: var-lib-kubelet-pods-8d35c42b\x2d9dbf\x2d44ab\x2da978\x2dc5da2236455b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnnbg2.mount: Deactivated successfully. 
Dec 13 14:11:36.950139 systemd[1]: var-lib-kubelet-pods-8d35c42b\x2d9dbf\x2d44ab\x2da978\x2dc5da2236455b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:11:36.950221 systemd[1]: var-lib-kubelet-pods-8d35c42b\x2d9dbf\x2d44ab\x2da978\x2dc5da2236455b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:11:36.950310 systemd[1]: var-lib-kubelet-pods-8d35c42b\x2d9dbf\x2d44ab\x2da978\x2dc5da2236455b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:11:36.952984 kubelet[2717]: I1213 14:11:36.952951 2717 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-run\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\"" Dec 13 14:11:36.953097 kubelet[2717]: I1213 14:11:36.952988 2717 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nnbg2\" (UniqueName: \"kubernetes.io/projected/8d35c42b-9dbf-44ab-a978-c5da2236455b-kube-api-access-nnbg2\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\"" Dec 13 14:11:36.953097 kubelet[2717]: I1213 14:11:36.953000 2717 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-hostproc\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\"" Dec 13 14:11:36.953097 kubelet[2717]: I1213 14:11:36.953011 2717 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-cni-path\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\"" Dec 13 14:11:36.953097 kubelet[2717]: I1213 14:11:36.953029 2717 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath 
\"\"" Dec 13 14:11:36.953097 kubelet[2717]: I1213 14:11:36.953043 2717 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-etc-cni-netd\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\"" Dec 13 14:11:36.953097 kubelet[2717]: I1213 14:11:36.953055 2717 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d35c42b-9dbf-44ab-a978-c5da2236455b-hubble-tls\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\"" Dec 13 14:11:36.953267 kubelet[2717]: I1213 14:11:36.953119 2717 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-xtables-lock\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\"" Dec 13 14:11:36.953267 kubelet[2717]: I1213 14:11:36.953133 2717 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-config-path\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\"" Dec 13 14:11:36.953267 kubelet[2717]: I1213 14:11:36.953143 2717 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d35c42b-9dbf-44ab-a978-c5da2236455b-clustermesh-secrets\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\"" Dec 13 14:11:36.953267 kubelet[2717]: I1213 14:11:36.953154 2717 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-host-proc-sys-net\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\"" Dec 13 14:11:36.953267 kubelet[2717]: I1213 14:11:36.953165 2717 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-ipsec-secrets\") on node 
\"ci-3510.3.6-a-18113e8891\" DevicePath \"\"" Dec 13 14:11:36.953267 kubelet[2717]: I1213 14:11:36.953174 2717 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-bpf-maps\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\"" Dec 13 14:11:36.953267 kubelet[2717]: I1213 14:11:36.953184 2717 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-cilium-cgroup\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\"" Dec 13 14:11:36.953267 kubelet[2717]: I1213 14:11:36.953202 2717 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d35c42b-9dbf-44ab-a978-c5da2236455b-lib-modules\") on node \"ci-3510.3.6-a-18113e8891\" DevicePath \"\"" Dec 13 14:11:37.323459 kubelet[2717]: E1213 14:11:37.323192 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-hcgrk" podUID="dd65ee0e-d2aa-4d32-9f34-1b5a4fd6ca58" Dec 13 14:11:37.341903 sshd[4604]: Accepted publickey for core from 10.200.16.10 port 38882 ssh2: RSA SHA256:xuCpWY3jYETt01AJgPfaKRWNP61F/EGSdrGlXn/pObI Dec 13 14:11:37.343241 sshd[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:11:37.347356 systemd-logind[1542]: New session 26 of user core. Dec 13 14:11:37.347707 systemd[1]: Started session-26.scope. 
Dec 13 14:11:37.443503 kubelet[2717]: E1213 14:11:37.443471 2717 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:11:37.783417 kubelet[2717]: I1213 14:11:37.783387 2717 scope.go:117] "RemoveContainer" containerID="29391162a3fd808561218bccab01e6e99342cab21cdf007adb01328cf00698c5" Dec 13 14:11:37.784812 env[1556]: time="2024-12-13T14:11:37.784552308Z" level=info msg="RemoveContainer for \"29391162a3fd808561218bccab01e6e99342cab21cdf007adb01328cf00698c5\"" Dec 13 14:11:37.793629 env[1556]: time="2024-12-13T14:11:37.793532835Z" level=info msg="RemoveContainer for \"29391162a3fd808561218bccab01e6e99342cab21cdf007adb01328cf00698c5\" returns successfully" Dec 13 14:11:37.823680 kubelet[2717]: I1213 14:11:37.823634 2717 topology_manager.go:215] "Topology Admit Handler" podUID="a3851dc2-f01c-4be1-ac90-cb3064195fd0" podNamespace="kube-system" podName="cilium-cntfb" Dec 13 14:11:37.823829 kubelet[2717]: E1213 14:11:37.823702 2717 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d35c42b-9dbf-44ab-a978-c5da2236455b" containerName="mount-cgroup" Dec 13 14:11:37.823829 kubelet[2717]: I1213 14:11:37.823728 2717 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d35c42b-9dbf-44ab-a978-c5da2236455b" containerName="mount-cgroup" Dec 13 14:11:37.857832 kubelet[2717]: I1213 14:11:37.857794 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a3851dc2-f01c-4be1-ac90-cb3064195fd0-bpf-maps\") pod \"cilium-cntfb\" (UID: \"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:37.858204 kubelet[2717]: I1213 14:11:37.857841 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/a3851dc2-f01c-4be1-ac90-cb3064195fd0-cilium-config-path\") pod \"cilium-cntfb\" (UID: \"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:37.858204 kubelet[2717]: I1213 14:11:37.857875 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a3851dc2-f01c-4be1-ac90-cb3064195fd0-cni-path\") pod \"cilium-cntfb\" (UID: \"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:37.858204 kubelet[2717]: I1213 14:11:37.857894 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a3851dc2-f01c-4be1-ac90-cb3064195fd0-hubble-tls\") pod \"cilium-cntfb\" (UID: \"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:37.858204 kubelet[2717]: I1213 14:11:37.857913 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a3851dc2-f01c-4be1-ac90-cb3064195fd0-cilium-run\") pod \"cilium-cntfb\" (UID: \"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:37.858204 kubelet[2717]: I1213 14:11:37.857943 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a3851dc2-f01c-4be1-ac90-cb3064195fd0-clustermesh-secrets\") pod \"cilium-cntfb\" (UID: \"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:37.858204 kubelet[2717]: I1213 14:11:37.857962 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a3851dc2-f01c-4be1-ac90-cb3064195fd0-host-proc-sys-net\") pod \"cilium-cntfb\" (UID: 
\"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:37.858355 kubelet[2717]: I1213 14:11:37.857980 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3851dc2-f01c-4be1-ac90-cb3064195fd0-lib-modules\") pod \"cilium-cntfb\" (UID: \"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:37.858355 kubelet[2717]: I1213 14:11:37.858001 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a3851dc2-f01c-4be1-ac90-cb3064195fd0-hostproc\") pod \"cilium-cntfb\" (UID: \"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:37.858355 kubelet[2717]: I1213 14:11:37.858029 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a3851dc2-f01c-4be1-ac90-cb3064195fd0-etc-cni-netd\") pod \"cilium-cntfb\" (UID: \"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:37.858355 kubelet[2717]: I1213 14:11:37.858049 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a3851dc2-f01c-4be1-ac90-cb3064195fd0-cilium-cgroup\") pod \"cilium-cntfb\" (UID: \"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:37.858355 kubelet[2717]: I1213 14:11:37.858083 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3851dc2-f01c-4be1-ac90-cb3064195fd0-xtables-lock\") pod \"cilium-cntfb\" (UID: \"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:37.858355 kubelet[2717]: I1213 14:11:37.858105 2717 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a3851dc2-f01c-4be1-ac90-cb3064195fd0-cilium-ipsec-secrets\") pod \"cilium-cntfb\" (UID: \"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:37.858490 kubelet[2717]: I1213 14:11:37.858124 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndr22\" (UniqueName: \"kubernetes.io/projected/a3851dc2-f01c-4be1-ac90-cb3064195fd0-kube-api-access-ndr22\") pod \"cilium-cntfb\" (UID: \"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:37.858490 kubelet[2717]: I1213 14:11:37.858155 2717 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a3851dc2-f01c-4be1-ac90-cb3064195fd0-host-proc-sys-kernel\") pod \"cilium-cntfb\" (UID: \"a3851dc2-f01c-4be1-ac90-cb3064195fd0\") " pod="kube-system/cilium-cntfb" Dec 13 14:11:38.134515 env[1556]: time="2024-12-13T14:11:38.133924633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cntfb,Uid:a3851dc2-f01c-4be1-ac90-cb3064195fd0,Namespace:kube-system,Attempt:0,}" Dec 13 14:11:38.169362 env[1556]: time="2024-12-13T14:11:38.169295940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:11:38.169616 env[1556]: time="2024-12-13T14:11:38.169340020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:11:38.169616 env[1556]: time="2024-12-13T14:11:38.169351821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:11:38.169616 env[1556]: time="2024-12-13T14:11:38.169548983Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a8d59ca4434a845fcfa1d84a040823fdef2068c0f3ecf79223162bdf11dee0b pid=4626 runtime=io.containerd.runc.v2 Dec 13 14:11:38.208556 env[1556]: time="2024-12-13T14:11:38.208521217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cntfb,Uid:a3851dc2-f01c-4be1-ac90-cb3064195fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a8d59ca4434a845fcfa1d84a040823fdef2068c0f3ecf79223162bdf11dee0b\"" Dec 13 14:11:38.212497 env[1556]: time="2024-12-13T14:11:38.212465069Z" level=info msg="CreateContainer within sandbox \"6a8d59ca4434a845fcfa1d84a040823fdef2068c0f3ecf79223162bdf11dee0b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:11:38.251665 env[1556]: time="2024-12-13T14:11:38.251623906Z" level=info msg="CreateContainer within sandbox \"6a8d59ca4434a845fcfa1d84a040823fdef2068c0f3ecf79223162bdf11dee0b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2366e2e9f21632890cbd68ee3011bc56e62e9dfd9c37c75f77e93dcf2bb34efd\"" Dec 13 14:11:38.253429 env[1556]: time="2024-12-13T14:11:38.252471277Z" level=info msg="StartContainer for \"2366e2e9f21632890cbd68ee3011bc56e62e9dfd9c37c75f77e93dcf2bb34efd\"" Dec 13 14:11:38.304028 env[1556]: time="2024-12-13T14:11:38.303989997Z" level=info msg="StartContainer for \"2366e2e9f21632890cbd68ee3011bc56e62e9dfd9c37c75f77e93dcf2bb34efd\" returns successfully" Dec 13 14:11:38.354358 env[1556]: time="2024-12-13T14:11:38.354309621Z" level=info msg="shim disconnected" id=2366e2e9f21632890cbd68ee3011bc56e62e9dfd9c37c75f77e93dcf2bb34efd Dec 13 14:11:38.354358 env[1556]: time="2024-12-13T14:11:38.354355702Z" level=warning msg="cleaning up after shim disconnected" id=2366e2e9f21632890cbd68ee3011bc56e62e9dfd9c37c75f77e93dcf2bb34efd 
namespace=k8s.io Dec 13 14:11:38.354585 env[1556]: time="2024-12-13T14:11:38.354364782Z" level=info msg="cleaning up dead shim" Dec 13 14:11:38.360947 env[1556]: time="2024-12-13T14:11:38.360901388Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4712 runtime=io.containerd.runc.v2\n" Dec 13 14:11:38.790014 env[1556]: time="2024-12-13T14:11:38.789971249Z" level=info msg="CreateContainer within sandbox \"6a8d59ca4434a845fcfa1d84a040823fdef2068c0f3ecf79223162bdf11dee0b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:11:38.822773 env[1556]: time="2024-12-13T14:11:38.822723241Z" level=info msg="CreateContainer within sandbox \"6a8d59ca4434a845fcfa1d84a040823fdef2068c0f3ecf79223162bdf11dee0b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a45dacf647ca0fa80c1719bca216fb7837bcec9a0ae1c06ae4cb414d11e4d9ca\"" Dec 13 14:11:38.823481 env[1556]: time="2024-12-13T14:11:38.823443931Z" level=info msg="StartContainer for \"a45dacf647ca0fa80c1719bca216fb7837bcec9a0ae1c06ae4cb414d11e4d9ca\"" Dec 13 14:11:38.866041 env[1556]: time="2024-12-13T14:11:38.866004092Z" level=info msg="StartContainer for \"a45dacf647ca0fa80c1719bca216fb7837bcec9a0ae1c06ae4cb414d11e4d9ca\" returns successfully" Dec 13 14:11:38.897971 env[1556]: time="2024-12-13T14:11:38.897922034Z" level=info msg="shim disconnected" id=a45dacf647ca0fa80c1719bca216fb7837bcec9a0ae1c06ae4cb414d11e4d9ca Dec 13 14:11:38.897971 env[1556]: time="2024-12-13T14:11:38.897967994Z" level=warning msg="cleaning up after shim disconnected" id=a45dacf647ca0fa80c1719bca216fb7837bcec9a0ae1c06ae4cb414d11e4d9ca namespace=k8s.io Dec 13 14:11:38.897971 env[1556]: time="2024-12-13T14:11:38.897977074Z" level=info msg="cleaning up dead shim" Dec 13 14:11:38.904874 env[1556]: time="2024-12-13T14:11:38.904832525Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:38Z\" level=info 
msg=\"starting signal loop\" namespace=k8s.io pid=4775 runtime=io.containerd.runc.v2\n" Dec 13 14:11:39.322804 kubelet[2717]: E1213 14:11:39.322731 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-hcgrk" podUID="dd65ee0e-d2aa-4d32-9f34-1b5a4fd6ca58" Dec 13 14:11:39.326260 kubelet[2717]: I1213 14:11:39.325978 2717 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8d35c42b-9dbf-44ab-a978-c5da2236455b" path="/var/lib/kubelet/pods/8d35c42b-9dbf-44ab-a978-c5da2236455b/volumes" Dec 13 14:11:39.799699 env[1556]: time="2024-12-13T14:11:39.799654341Z" level=info msg="CreateContainer within sandbox \"6a8d59ca4434a845fcfa1d84a040823fdef2068c0f3ecf79223162bdf11dee0b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:11:39.828761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3621268154.mount: Deactivated successfully. Dec 13 14:11:39.834462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582850304.mount: Deactivated successfully. 
Dec 13 14:11:39.844563 env[1556]: time="2024-12-13T14:11:39.844519574Z" level=info msg="CreateContainer within sandbox \"6a8d59ca4434a845fcfa1d84a040823fdef2068c0f3ecf79223162bdf11dee0b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a2534b137e0027c931268a14440605c6951af09ac11d7d9dc3093b7c3191c4f5\""
Dec 13 14:11:39.845162 env[1556]: time="2024-12-13T14:11:39.845137021Z" level=info msg="StartContainer for \"a2534b137e0027c931268a14440605c6951af09ac11d7d9dc3093b7c3191c4f5\""
Dec 13 14:11:39.895909 env[1556]: time="2024-12-13T14:11:39.895868205Z" level=info msg="StartContainer for \"a2534b137e0027c931268a14440605c6951af09ac11d7d9dc3093b7c3191c4f5\" returns successfully"
Dec 13 14:11:39.936665 env[1556]: time="2024-12-13T14:11:39.936614907Z" level=info msg="shim disconnected" id=a2534b137e0027c931268a14440605c6951af09ac11d7d9dc3093b7c3191c4f5
Dec 13 14:11:39.936665 env[1556]: time="2024-12-13T14:11:39.936660187Z" level=warning msg="cleaning up after shim disconnected" id=a2534b137e0027c931268a14440605c6951af09ac11d7d9dc3093b7c3191c4f5 namespace=k8s.io
Dec 13 14:11:39.936665 env[1556]: time="2024-12-13T14:11:39.936669307Z" level=info msg="cleaning up dead shim"
Dec 13 14:11:39.943579 env[1556]: time="2024-12-13T14:11:39.943535032Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4833 runtime=io.containerd.runc.v2\n"
Dec 13 14:11:40.805899 env[1556]: time="2024-12-13T14:11:40.805858099Z" level=info msg="CreateContainer within sandbox \"6a8d59ca4434a845fcfa1d84a040823fdef2068c0f3ecf79223162bdf11dee0b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:11:40.831939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3428642646.mount: Deactivated successfully.
Dec 13 14:11:40.842802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2088198905.mount: Deactivated successfully.
Dec 13 14:11:40.852161 env[1556]: time="2024-12-13T14:11:40.852111747Z" level=info msg="CreateContainer within sandbox \"6a8d59ca4434a845fcfa1d84a040823fdef2068c0f3ecf79223162bdf11dee0b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fdab7f4a3a752cb2d410a6ea2864217de7d645cbde42729f731b1c957e3968ba\""
Dec 13 14:11:40.853151 env[1556]: time="2024-12-13T14:11:40.853127799Z" level=info msg="StartContainer for \"fdab7f4a3a752cb2d410a6ea2864217de7d645cbde42729f731b1c957e3968ba\""
Dec 13 14:11:40.896852 env[1556]: time="2024-12-13T14:11:40.896811058Z" level=info msg="StartContainer for \"fdab7f4a3a752cb2d410a6ea2864217de7d645cbde42729f731b1c957e3968ba\" returns successfully"
Dec 13 14:11:40.923008 env[1556]: time="2024-12-13T14:11:40.922965597Z" level=info msg="shim disconnected" id=fdab7f4a3a752cb2d410a6ea2864217de7d645cbde42729f731b1c957e3968ba
Dec 13 14:11:40.923320 env[1556]: time="2024-12-13T14:11:40.923291561Z" level=warning msg="cleaning up after shim disconnected" id=fdab7f4a3a752cb2d410a6ea2864217de7d645cbde42729f731b1c957e3968ba namespace=k8s.io
Dec 13 14:11:40.923422 env[1556]: time="2024-12-13T14:11:40.923408162Z" level=info msg="cleaning up dead shim"
Dec 13 14:11:40.929591 env[1556]: time="2024-12-13T14:11:40.929561193Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:11:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4888 runtime=io.containerd.runc.v2\n"
Dec 13 14:11:41.323007 kubelet[2717]: E1213 14:11:41.322976 2717 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-hcgrk" podUID="dd65ee0e-d2aa-4d32-9f34-1b5a4fd6ca58"
Dec 13 14:11:41.812345 env[1556]: time="2024-12-13T14:11:41.812302103Z" level=info msg="CreateContainer within sandbox \"6a8d59ca4434a845fcfa1d84a040823fdef2068c0f3ecf79223162bdf11dee0b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:11:41.863249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1864215918.mount: Deactivated successfully.
Dec 13 14:11:41.915567 env[1556]: time="2024-12-13T14:11:41.915517754Z" level=info msg="CreateContainer within sandbox \"6a8d59ca4434a845fcfa1d84a040823fdef2068c0f3ecf79223162bdf11dee0b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4536714fb1fa1efa215f972f94091bed44a3752b9995d69ad370d6777b87dc54\""
Dec 13 14:11:41.917216 env[1556]: time="2024-12-13T14:11:41.916146120Z" level=info msg="StartContainer for \"4536714fb1fa1efa215f972f94091bed44a3752b9995d69ad370d6777b87dc54\""
Dec 13 14:11:41.927854 kubelet[2717]: I1213 14:11:41.927835 2717 setters.go:568] "Node became not ready" node="ci-3510.3.6-a-18113e8891" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:11:41Z","lastTransitionTime":"2024-12-13T14:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:11:41.983194 env[1556]: time="2024-12-13T14:11:41.983130508Z" level=info msg="StartContainer for \"4536714fb1fa1efa215f972f94091bed44a3752b9995d69ad370d6777b87dc54\" returns successfully"
Dec 13 14:11:42.005807 systemd[1]: run-containerd-runc-k8s.io-4536714fb1fa1efa215f972f94091bed44a3752b9995d69ad370d6777b87dc54-runc.U4dDhL.mount: Deactivated successfully.
Dec 13 14:11:42.244153 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Dec 13 14:11:43.805449 systemd[1]: run-containerd-runc-k8s.io-4536714fb1fa1efa215f972f94091bed44a3752b9995d69ad370d6777b87dc54-runc.FPQJJj.mount: Deactivated successfully.
Dec 13 14:11:44.850596 systemd-networkd[1743]: lxc_health: Link UP
Dec 13 14:11:44.875279 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:11:44.886222 systemd-networkd[1743]: lxc_health: Gained carrier
Dec 13 14:11:46.154007 kubelet[2717]: I1213 14:11:46.153975 2717 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-cntfb" podStartSLOduration=9.153934962 podStartE2EDuration="9.153934962s" podCreationTimestamp="2024-12-13 14:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:11:42.826718762 +0000 UTC m=+215.610694594" watchObservedRunningTime="2024-12-13 14:11:46.153934962 +0000 UTC m=+218.937910794"
Dec 13 14:11:46.416215 systemd-networkd[1743]: lxc_health: Gained IPv6LL
Dec 13 14:11:48.043941 systemd[1]: run-containerd-runc-k8s.io-4536714fb1fa1efa215f972f94091bed44a3752b9995d69ad370d6777b87dc54-runc.qPcm9W.mount: Deactivated successfully.
Dec 13 14:11:50.164256 systemd[1]: run-containerd-runc-k8s.io-4536714fb1fa1efa215f972f94091bed44a3752b9995d69ad370d6777b87dc54-runc.OGKPx7.mount: Deactivated successfully.
Dec 13 14:11:52.285415 systemd[1]: run-containerd-runc-k8s.io-4536714fb1fa1efa215f972f94091bed44a3752b9995d69ad370d6777b87dc54-runc.GvFz1p.mount: Deactivated successfully.
Dec 13 14:11:54.408521 systemd[1]: run-containerd-runc-k8s.io-4536714fb1fa1efa215f972f94091bed44a3752b9995d69ad370d6777b87dc54-runc.uNiekv.mount: Deactivated successfully.
Dec 13 14:11:56.529520 systemd[1]: run-containerd-runc-k8s.io-4536714fb1fa1efa215f972f94091bed44a3752b9995d69ad370d6777b87dc54-runc.ntPARs.mount: Deactivated successfully.
Dec 13 14:11:58.792212 sshd[4604]: pam_unix(sshd:session): session closed for user core
Dec 13 14:11:58.794652 systemd[1]: sshd@23-10.200.20.41:22-10.200.16.10:38882.service: Deactivated successfully.
Dec 13 14:11:58.795406 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 14:11:58.796454 systemd-logind[1542]: Session 26 logged out. Waiting for processes to exit.
Dec 13 14:11:58.797260 systemd-logind[1542]: Removed session 26.