Feb 12 19:14:31.006293 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 12 19:14:31.006312 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 19:14:31.006320 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 12 19:14:31.006327 kernel: printk: bootconsole [pl11] enabled
Feb 12 19:14:31.006332 kernel: efi: EFI v2.70 by EDK II
Feb 12 19:14:31.006337 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 12 19:14:31.006343 kernel: random: crng init done
Feb 12 19:14:31.006349 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:14:31.006354 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 12 19:14:31.006360 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:14:31.006365 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:14:31.006372 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 12 19:14:31.013428 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:14:31.013438 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:14:31.013445 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:14:31.013452 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:14:31.013458 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:14:31.013469 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:14:31.013475 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 12 19:14:31.013481 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:14:31.013487 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 12 19:14:31.013493 kernel: NUMA: Failed to initialise from firmware
Feb 12 19:14:31.013499 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 12 19:14:31.013505 kernel: NUMA: NODE_DATA [mem 0x1bf7f1900-0x1bf7f6fff]
Feb 12 19:14:31.013510 kernel: Zone ranges:
Feb 12 19:14:31.013516 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 12 19:14:31.013522 kernel: DMA32 empty
Feb 12 19:14:31.013529 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 12 19:14:31.013540 kernel: Movable zone start for each node
Feb 12 19:14:31.013546 kernel: Early memory node ranges
Feb 12 19:14:31.013552 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 12 19:14:31.013558 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 12 19:14:31.013564 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 12 19:14:31.013569 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 12 19:14:31.013575 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 12 19:14:31.013581 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 12 19:14:31.013586 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 12 19:14:31.013592 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 12 19:14:31.013598 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 12 19:14:31.013605 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 12 19:14:31.013613 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 12 19:14:31.013619 kernel: psci: probing for conduit method from ACPI.
Feb 12 19:14:31.013626 kernel: psci: PSCIv1.1 detected in firmware.
Feb 12 19:14:31.013632 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 19:14:31.013639 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 12 19:14:31.013645 kernel: psci: SMC Calling Convention v1.4
Feb 12 19:14:31.013651 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 12 19:14:31.013657 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 12 19:14:31.013663 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 19:14:31.013669 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 19:14:31.013676 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 12 19:14:31.013682 kernel: Detected PIPT I-cache on CPU0
Feb 12 19:14:31.013688 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 19:14:31.013694 kernel: CPU features: detected: Hardware dirty bit management
Feb 12 19:14:31.013700 kernel: CPU features: detected: Spectre-BHB
Feb 12 19:14:31.013706 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 19:14:31.013714 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 19:14:31.013720 kernel: CPU features: detected: ARM erratum 1418040
Feb 12 19:14:31.013726 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 12 19:14:31.013732 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 12 19:14:31.013738 kernel: Policy zone: Normal
Feb 12 19:14:31.013746 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:14:31.013753 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:14:31.013759 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:14:31.013765 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:14:31.013772 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:14:31.013779 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 12 19:14:31.013786 kernel: Memory: 3991932K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202228K reserved, 0K cma-reserved)
Feb 12 19:14:31.013792 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 19:14:31.013798 kernel: trace event string verifier disabled
Feb 12 19:14:31.013804 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 19:14:31.013810 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:14:31.013817 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 19:14:31.013823 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 19:14:31.013829 kernel: Tracing variant of Tasks RCU enabled.
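
The command line above drives most of what follows in this boot: it names the dm-verity device for /usr (verity.usr, verity.usrhash), the root filesystem (root=LABEL=ROOT), the consoles, and the Flatcar/Ignition switches (flatcar.first_boot, flatcar.oem.id=azure). A minimal Python sketch of splitting such a line into key/value pairs for inspection; the variable names are illustrative, and repeated keys such as console= are kept as lists:

    # Minimal sketch: split the kernel command line logged above into
    # key/value pairs; flag-style parameters without '=' map to None.
    CMDLINE = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
        "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
        "console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 "
        "flatcar.first_boot=detected acpi=force flatcar.oem.id=azure "
        "flatcar.autologin "
        "verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40"
    )

    params = {}
    for token in CMDLINE.split():
        key, sep, value = token.partition("=")
        params.setdefault(key, []).append(value if sep else None)

    print(params["verity.usrhash"])  # dm-verity root hash for the /usr device
    print(params["console"])         # ['tty1', 'ttyAMA0,115200n8']
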
Feb 12 19:14:31.013835 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:14:31.013841 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 19:14:31.013849 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 19:14:31.013855 kernel: GICv3: 960 SPIs implemented
Feb 12 19:14:31.013861 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 19:14:31.013867 kernel: GICv3: Distributor has no Range Selector support
Feb 12 19:14:31.013873 kernel: Root IRQ handler: gic_handle_irq
Feb 12 19:14:31.013879 kernel: GICv3: 16 PPIs implemented
Feb 12 19:14:31.013885 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 12 19:14:31.013891 kernel: ITS: No ITS available, not enabling LPIs
Feb 12 19:14:31.013897 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:14:31.013903 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 12 19:14:31.013910 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 12 19:14:31.013916 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 12 19:14:31.013924 kernel: Console: colour dummy device 80x25
Feb 12 19:14:31.013930 kernel: printk: console [tty1] enabled
Feb 12 19:14:31.013937 kernel: ACPI: Core revision 20210730
Feb 12 19:14:31.013944 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 12 19:14:31.013950 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:14:31.013956 kernel: LSM: Security Framework initializing
Feb 12 19:14:31.013963 kernel: SELinux: Initializing.
Feb 12 19:14:31.013969 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:14:31.013976 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:14:31.013983 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 12 19:14:31.013990 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 12 19:14:31.013996 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:14:31.014002 kernel: Remapping and enabling EFI services.
Feb 12 19:14:31.014009 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:14:31.014015 kernel: Detected PIPT I-cache on CPU1
Feb 12 19:14:31.014022 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 12 19:14:31.014028 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:14:31.014035 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 12 19:14:31.014042 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 19:14:31.014048 kernel: SMP: Total of 2 processors activated.
Feb 12 19:14:31.014054 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 19:14:31.014061 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 12 19:14:31.014067 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 12 19:14:31.014074 kernel: CPU features: detected: CRC32 instructions
Feb 12 19:14:31.014080 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 12 19:14:31.014087 kernel: CPU features: detected: LSE atomic instructions
Feb 12 19:14:31.014093 kernel: CPU features: detected: Privileged Access Never
Feb 12 19:14:31.014100 kernel: CPU: All CPU(s) started at EL1
Feb 12 19:14:31.014107 kernel: alternatives: patching kernel code
Feb 12 19:14:31.014118 kernel: devtmpfs: initialized
Feb 12 19:14:31.014126 kernel: KASLR enabled
Feb 12 19:14:31.014133 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:14:31.014140 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 19:14:31.014147 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:14:31.014153 kernel: SMBIOS 3.1.0 present.
Feb 12 19:14:31.014160 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 12 19:14:31.014167 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:14:31.014175 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 19:14:31.014182 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 19:14:31.014189 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 19:14:31.014195 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:14:31.014202 kernel: audit: type=2000 audit(0.089:1): state=initialized audit_enabled=0 res=1
Feb 12 19:14:31.014209 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:14:31.014215 kernel: cpuidle: using governor menu
Feb 12 19:14:31.014223 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 19:14:31.014230 kernel: ASID allocator initialised with 32768 entries
Feb 12 19:14:31.014237 kernel: ACPI: bus type PCI registered
Feb 12 19:14:31.014243 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:14:31.014250 kernel: Serial: AMBA PL011 UART driver
Feb 12 19:14:31.014257 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:14:31.014263 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 19:14:31.014270 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:14:31.014277 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 19:14:31.014284 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:14:31.014291 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 19:14:31.014298 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:14:31.014305 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:14:31.014311 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:14:31.014318 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:14:31.014325 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:14:31.014331 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:14:31.014338 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:14:31.014346 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:14:31.014352 kernel: ACPI: Interpreter enabled
Feb 12 19:14:31.014359 kernel: ACPI: Using GIC for interrupt routing
Feb 12 19:14:31.014366 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 12 19:14:31.014372 kernel: printk: console [ttyAMA0] enabled
Feb 12 19:14:31.014395 kernel: printk: bootconsole [pl11] disabled
Feb 12 19:14:31.014401 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 12 19:14:31.014411 kernel: iommu: Default domain type: Translated
Feb 12 19:14:31.014418 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 19:14:31.014426 kernel: vgaarb: loaded
Feb 12 19:14:31.014432 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:14:31.014439 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 12 19:14:31.014446 kernel: PTP clock support registered
Feb 12 19:14:31.014452 kernel: Registered efivars operations
Feb 12 19:14:31.014459 kernel: No ACPI PMU IRQ for CPU0
Feb 12 19:14:31.014466 kernel: No ACPI PMU IRQ for CPU1
Feb 12 19:14:31.014473 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 19:14:31.014479 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:14:31.014488 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:14:31.014494 kernel: pnp: PnP ACPI init
Feb 12 19:14:31.014501 kernel: pnp: PnP ACPI: found 0 devices
Feb 12 19:14:31.014508 kernel: NET: Registered PF_INET protocol family
Feb 12 19:14:31.014514 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:14:31.014521 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 19:14:31.014528 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:14:31.014535 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:14:31.014542 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 19:14:31.014550 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 19:14:31.014557 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:14:31.014564 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:14:31.014570 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:14:31.014577 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:14:31.014584 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 12 19:14:31.014591 kernel: kvm [1]: HYP mode not available
Feb 12 19:14:31.014597 kernel: Initialise system trusted keyrings
Feb 12 19:14:31.014604 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 19:14:31.014612 kernel: Key type asymmetric registered
Feb 12 19:14:31.014619 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:14:31.014625 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:14:31.014632 kernel: io scheduler mq-deadline registered
Feb 12 19:14:31.014639 kernel: io scheduler kyber registered
Feb 12 19:14:31.014645 kernel: io scheduler bfq registered
Feb 12 19:14:31.014652 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:14:31.014659 kernel: thunder_xcv, ver 1.0
Feb 12 19:14:31.014665 kernel: thunder_bgx, ver 1.0
Feb 12 19:14:31.014673 kernel: nicpf, ver 1.0
Feb 12 19:14:31.014680 kernel: nicvf, ver 1.0
Feb 12 19:14:31.020548 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 19:14:31.020640 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T19:14:30 UTC (1707765270)
Feb 12 19:14:31.020650 kernel: efifb: probing for efifb
Feb 12 19:14:31.020658 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 12 19:14:31.020665 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 12 19:14:31.020672 kernel: efifb: scrolling: redraw
Feb 12 19:14:31.020687 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 12 19:14:31.020694 kernel: Console: switching to colour frame buffer device 128x48
Feb 12 19:14:31.020701 kernel: fb0: EFI VGA frame buffer device
Feb 12 19:14:31.020708 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 12 19:14:31.020715 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 19:14:31.020722 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:14:31.020728 kernel: Segment Routing with IPv6
Feb 12 19:14:31.020735 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:14:31.020744 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:14:31.020752 kernel: Key type dns_resolver registered
Feb 12 19:14:31.020759 kernel: registered taskstats version 1
Feb 12 19:14:31.020766 kernel: Loading compiled-in X.509 certificates
Feb 12 19:14:31.020773 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 19:14:31.020780 kernel: Key type .fscrypt registered
Feb 12 19:14:31.020787 kernel: Key type fscrypt-provisioning registered
Feb 12 19:14:31.020793 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:14:31.020800 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:14:31.020807 kernel: ima: No architecture policies found
Feb 12 19:14:31.020818 kernel: Freeing unused kernel memory: 34688K
Feb 12 19:14:31.020824 kernel: Run /init as init process
Feb 12 19:14:31.020831 kernel: with arguments:
Feb 12 19:14:31.020838 kernel: /init
Feb 12 19:14:31.020844 kernel: with environment:
Feb 12 19:14:31.020851 kernel: HOME=/
Feb 12 19:14:31.020857 kernel: TERM=linux
Feb 12 19:14:31.020864 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:14:31.020875 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:14:31.020886 systemd[1]: Detected virtualization microsoft.
Feb 12 19:14:31.020893 systemd[1]: Detected architecture arm64.
Feb 12 19:14:31.020900 systemd[1]: Running in initrd.
Feb 12 19:14:31.020907 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:14:31.020914 systemd[1]: Hostname set to <localhost>.
Feb 12 19:14:31.020925 systemd[1]: Initializing machine ID from random generator.
Feb 12 19:14:31.020932 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:14:31.020940 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:14:31.020947 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:14:31.020954 systemd[1]: Reached target paths.target.
Feb 12 19:14:31.020961 systemd[1]: Reached target slices.target.
Feb 12 19:14:31.020968 systemd[1]: Reached target swap.target.
Feb 12 19:14:31.020978 systemd[1]: Reached target timers.target.
Feb 12 19:14:31.020986 systemd[1]: Listening on iscsid.socket.
Feb 12 19:14:31.020993 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:14:31.021002 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:14:31.021009 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:14:31.021016 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:14:31.021023 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:14:31.021033 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:14:31.021040 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:14:31.021047 systemd[1]: Reached target sockets.target.
Feb 12 19:14:31.021055 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:14:31.021062 systemd[1]: Finished network-cleanup.service.
Feb 12 19:14:31.021070 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:14:31.021080 systemd[1]: Starting systemd-journald.service...
Feb 12 19:14:31.021087 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:14:31.021094 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:14:31.021101 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:14:31.021113 systemd-journald[276]: Journal started
Feb 12 19:14:31.021162 systemd-journald[276]: Runtime Journal (/run/log/journal/aad14a69734c46098c0b2559eb6f70c0) is 8.0M, max 78.6M, 70.6M free.
Feb 12 19:14:31.009276 systemd-modules-load[277]: Inserted module 'overlay'
Feb 12 19:14:31.058451 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:14:31.058473 kernel: Bridge firewalling registered
Feb 12 19:14:31.058817 systemd-modules-load[277]: Inserted module 'br_netfilter'
Feb 12 19:14:31.073431 systemd[1]: Started systemd-journald.service.
Feb 12 19:14:31.078631 systemd-resolved[278]: Positive Trust Anchors:
Feb 12 19:14:31.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.078648 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:14:31.180771 kernel: SCSI subsystem initialized
Feb 12 19:14:31.180795 kernel: audit: type=1130 audit(1707765271.084:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.180805 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:14:31.180815 kernel: audit: type=1130 audit(1707765271.115:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.180823 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:14:31.180831 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:14:31.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.078677 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:14:31.257153 kernel: audit: type=1130 audit(1707765271.185:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.080772 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 12 19:14:31.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.084801 systemd[1]: Started systemd-resolved.service.
Feb 12 19:14:31.115666 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:14:31.366654 kernel: audit: type=1130 audit(1707765271.261:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.366685 kernel: audit: type=1130 audit(1707765271.292:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.366695 kernel: audit: type=1130 audit(1707765271.339:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.185555 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:14:31.256607 systemd-modules-load[277]: Inserted module 'dm_multipath'
Feb 12 19:14:31.283030 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:14:31.293192 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:14:31.340177 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:14:31.399050 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:14:31.548196 kernel: audit: type=1130 audit(1707765271.461:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.548224 kernel: audit: type=1130 audit(1707765271.519:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.405287 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:14:31.410670 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:14:31.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.450276 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:14:31.585944 kernel: audit: type=1130 audit(1707765271.552:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.461684 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:14:31.520156 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:14:31.597485 dracut-cmdline[298]: dracut-dracut-053
Feb 12 19:14:31.574112 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:14:31.606726 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:14:31.699403 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:14:31.711401 kernel: iscsi: registered transport (tcp)
Feb 12 19:14:31.730959 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:14:31.730983 kernel: QLogic iSCSI HBA Driver
Feb 12 19:14:31.767014 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:14:31.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:31.772440 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:14:31.826397 kernel: raid6: neonx8 gen() 13820 MB/s
Feb 12 19:14:31.848388 kernel: raid6: neonx8 xor() 10831 MB/s
Feb 12 19:14:31.869579 kernel: raid6: neonx4 gen() 13568 MB/s
Feb 12 19:14:31.890393 kernel: raid6: neonx4 xor() 11716 MB/s
Feb 12 19:14:31.912405 kernel: raid6: neonx2 gen() 12924 MB/s
Feb 12 19:14:31.934396 kernel: raid6: neonx2 xor() 10220 MB/s
Feb 12 19:14:31.954387 kernel: raid6: neonx1 gen() 10514 MB/s
Feb 12 19:14:31.975392 kernel: raid6: neonx1 xor() 8755 MB/s
Feb 12 19:14:31.996386 kernel: raid6: int64x8 gen() 6297 MB/s
Feb 12 19:14:32.016387 kernel: raid6: int64x8 xor() 3550 MB/s
Feb 12 19:14:32.037391 kernel: raid6: int64x4 gen() 7246 MB/s
Feb 12 19:14:32.057387 kernel: raid6: int64x4 xor() 3850 MB/s
Feb 12 19:14:32.077386 kernel: raid6: int64x2 gen() 6153 MB/s
Feb 12 19:14:32.098387 kernel: raid6: int64x2 xor() 3321 MB/s
Feb 12 19:14:32.118386 kernel: raid6: int64x1 gen() 5046 MB/s
Feb 12 19:14:32.142706 kernel: raid6: int64x1 xor() 2645 MB/s
Feb 12 19:14:32.142733 kernel: raid6: using algorithm neonx8 gen() 13820 MB/s
Feb 12 19:14:32.142741 kernel: raid6: .... xor() 10831 MB/s, rmw enabled
Feb 12 19:14:32.148193 kernel: raid6: using neon recovery algorithm
Feb 12 19:14:32.165390 kernel: xor: measuring software checksum speed
Feb 12 19:14:32.173778 kernel: 8regs : 17289 MB/sec
Feb 12 19:14:32.173787 kernel: 32regs : 20749 MB/sec
Feb 12 19:14:32.179032 kernel: arm64_neon : 27892 MB/sec
Feb 12 19:14:32.179042 kernel: xor: using function: arm64_neon (27892 MB/sec)
Feb 12 19:14:32.240398 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 19:14:32.249819 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:14:32.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:32.258000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:14:32.258000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:14:32.259136 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:14:32.273844 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Feb 12 19:14:32.279174 systemd[1]: Started systemd-udevd.service.
Feb 12 19:14:32.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:32.290567 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:14:32.307252 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Feb 12 19:14:32.335396 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:14:32.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:32.340925 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:14:32.380660 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:14:32.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:32.432399 kernel: hv_vmbus: Vmbus version:5.3
Feb 12 19:14:32.440400 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 12 19:14:32.459404 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 12 19:14:32.470529 kernel: hv_vmbus: registering driver hid_hyperv
Feb 12 19:14:32.470588 kernel: hv_vmbus: registering driver hv_netvsc
Feb 12 19:14:32.478499 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 12 19:14:32.487265 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 12 19:14:32.495397 kernel: hv_vmbus: registering driver hv_storvsc
Feb 12 19:14:32.503306 kernel: scsi host0: storvsc_host_t
Feb 12 19:14:32.503510 kernel: scsi host1: storvsc_host_t
Feb 12 19:14:32.503533 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 12 19:14:32.511434 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 12 19:14:32.535925 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 12 19:14:32.536143 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 12 19:14:32.543299 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 12 19:14:32.547746 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 12 19:14:32.552079 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 12 19:14:32.553334 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 12 19:14:32.553460 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 12 19:14:32.553541 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 12 19:14:32.567547 kernel: hv_netvsc 0022487e-2935-0022-487e-29350022487e eth0: VF slot 1 added
Feb 12 19:14:32.576398 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:14:32.581405 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 12 19:14:32.588415 kernel: hv_vmbus: registering driver hv_pci
Feb 12 19:14:32.599890 kernel: hv_pci 85635863-c857-497f-ae82-fb75d3bc668c: PCI VMBus probing: Using version 0x10004
Feb 12 19:14:32.613107 kernel: hv_pci 85635863-c857-497f-ae82-fb75d3bc668c: PCI host bridge to bus c857:00
Feb 12 19:14:32.613282 kernel: pci_bus c857:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 12 19:14:32.620778 kernel: pci_bus c857:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 12 19:14:32.628992 kernel: pci c857:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 12 19:14:32.642532 kernel: pci c857:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 12 19:14:32.669770 kernel: pci c857:00:02.0: enabling Extended Tags
Feb 12 19:14:32.694464 kernel: pci c857:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c857:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 12 19:14:32.708383 kernel: pci_bus c857:00: busn_res: [bus 00-ff] end is updated to 00
Feb 12 19:14:32.708575 kernel: pci c857:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 12 19:14:32.749405 kernel: mlx5_core c857:00:02.0: firmware version: 16.30.1284
Feb 12 19:14:32.910400 kernel: mlx5_core c857:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Feb 12 19:14:32.971966 kernel: hv_netvsc 0022487e-2935-0022-487e-29350022487e eth0: VF registering: eth1
Feb 12 19:14:32.972141 kernel: mlx5_core c857:00:02.0 eth1: joined to eth0
Feb 12 19:14:32.985400 kernel: mlx5_core c857:00:02.0 enP51287s1: renamed from eth1
Feb 12 19:14:33.111769 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 19:14:33.160408 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (541)
Feb 12 19:14:33.173553 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:14:33.330690 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 19:14:33.337259 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 19:14:33.356644 systemd[1]: Starting disk-uuid.service...
Feb 12 19:14:33.445434 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 19:14:34.381969 disk-uuid[596]: The operation has completed successfully.
Feb 12 19:14:34.391305 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:14:34.445501 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 19:14:34.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:34.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:34.445593 systemd[1]: Finished disk-uuid.service.
Feb 12 19:14:34.459024 systemd[1]: Starting verity-setup.service...
Feb 12 19:14:34.531405 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 12 19:14:34.754470 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 19:14:34.760307 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 19:14:34.768564 systemd[1]: Finished verity-setup.service.
Feb 12 19:14:34.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:34.827512 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 19:14:34.827930 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 19:14:34.832274 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 19:14:34.833081 systemd[1]: Starting ignition-setup.service...
Feb 12 19:14:34.841269 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 19:14:34.880952 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:14:34.881014 kernel: BTRFS info (device sda6): using free space tree
Feb 12 19:14:34.885717 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 19:14:34.934348 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 19:14:34.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:34.943000 audit: BPF prog-id=9 op=LOAD
Feb 12 19:14:34.944228 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:14:34.970471 systemd-networkd[867]: lo: Link UP
Feb 12 19:14:34.970484 systemd-networkd[867]: lo: Gained carrier
Feb 12 19:14:34.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:34.970879 systemd-networkd[867]: Enumeration completed
Feb 12 19:14:34.974160 systemd[1]: Started systemd-networkd.service.
Feb 12 19:14:34.974666 systemd-networkd[867]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:14:35.036567 kernel: kauditd_printk_skb: 13 callbacks suppressed
Feb 12 19:14:35.036598 kernel: audit: type=1130 audit(1707765275.012:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:35.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:34.979512 systemd[1]: Reached target network.target.
Feb 12 19:14:34.989308 systemd[1]: Starting iscsiuio.service...
Feb 12 19:14:35.048468 iscsid[875]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:14:35.048468 iscsid[875]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 12 19:14:35.048468 iscsid[875]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 12 19:14:35.048468 iscsid[875]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 19:14:35.048468 iscsid[875]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 19:14:35.048468 iscsid[875]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:14:35.048468 iscsid[875]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 19:14:35.199168 kernel: audit: type=1130 audit(1707765275.051:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
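
The iscsid warning above is cosmetic in this boot (no iSCSI targets are used), and the message itself spells out the remedy. A minimal sketch of creating the file it asks for, reusing the illustrative IQN from the message; a real deployment would pick its own iqn.yyyy-mm.<reversed domain name> value:

    # Minimal sketch: write the InitiatorName file iscsid warns about.
    # The IQN below is the example value from the log message itself,
    # not a name iscsid requires.
    with open("/etc/iscsi/initiatorname.iscsi", "w", encoding="ascii") as f:
        f.write("InitiatorName=iqn.2001-04.com.redhat:fc6\n")
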
Feb 12 19:14:35.199191 kernel: mlx5_core c857:00:02.0 enP51287s1: Link up
Feb 12 19:14:35.199372 kernel: audit: type=1130 audit(1707765275.124:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:35.199402 kernel: hv_netvsc 0022487e-2935-0022-487e-29350022487e eth0: Data path switched to VF: enP51287s1
Feb 12 19:14:35.199490 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:14:35.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:35.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:35.008520 systemd[1]: Started iscsiuio.service.
Feb 12 19:14:35.015323 systemd[1]: Starting iscsid.service...
Feb 12 19:14:35.045988 systemd[1]: Started iscsid.service.
Feb 12 19:14:35.052728 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 19:14:35.053813 systemd[1]: Starting dracut-initqueue.service...
Feb 12 19:14:35.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:35.114567 systemd[1]: Finished dracut-initqueue.service.
Feb 12 19:14:35.253355 kernel: audit: type=1130 audit(1707765275.227:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:35.124603 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 19:14:35.151937 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:14:35.178355 systemd[1]: Reached target remote-fs.target.
Feb 12 19:14:35.185713 systemd-networkd[867]: enP51287s1: Link UP
Feb 12 19:14:35.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:35.186058 systemd-networkd[867]: eth0: Link UP
Feb 12 19:14:35.186704 systemd-networkd[867]: eth0: Gained carrier
Feb 12 19:14:35.302176 kernel: audit: type=1130 audit(1707765275.272:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:35.198583 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 19:14:35.201614 systemd-networkd[867]: enP51287s1: Gained carrier
Feb 12 19:14:35.213475 systemd-networkd[867]: eth0: DHCPv4 address 10.200.20.31/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 12 19:14:35.217724 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 19:14:35.265728 systemd[1]: Finished ignition-setup.service.
Feb 12 19:14:35.297588 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 19:14:36.910493 systemd-networkd[867]: eth0: Gained IPv6LL
Feb 12 19:14:38.147370 ignition[894]: Ignition 2.14.0
Feb 12 19:14:38.148413 ignition[894]: Stage: fetch-offline
Feb 12 19:14:38.148496 ignition[894]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:14:38.148523 ignition[894]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:14:38.278739 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:14:38.278889 ignition[894]: parsed url from cmdline: ""
Feb 12 19:14:38.278894 ignition[894]: no config URL provided
Feb 12 19:14:38.278899 ignition[894]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:14:38.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:38.289711 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 19:14:38.324257 kernel: audit: type=1130 audit(1707765278.295:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:38.278907 ignition[894]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:14:38.318357 systemd[1]: Starting ignition-fetch.service...
Feb 12 19:14:38.278913 ignition[894]: failed to fetch config: resource requires networking
Feb 12 19:14:38.279143 ignition[894]: Ignition finished successfully
Feb 12 19:14:38.329835 ignition[900]: Ignition 2.14.0
Feb 12 19:14:38.329841 ignition[900]: Stage: fetch
Feb 12 19:14:38.329953 ignition[900]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:14:38.329973 ignition[900]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:14:38.332767 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:14:38.335020 ignition[900]: parsed url from cmdline: ""
Feb 12 19:14:38.335029 ignition[900]: no config URL provided
Feb 12 19:14:38.335037 ignition[900]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:14:38.335059 ignition[900]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:14:38.335097 ignition[900]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 12 19:14:38.361390 ignition[900]: GET result: OK
Feb 12 19:14:38.361564 ignition[900]: config has been read from IMDS userdata
Feb 12 19:14:38.361632 ignition[900]: parsing config with SHA512: 59221ccfe4e13f52125657f57e495e875da5af18e84398947b202a3c47cf4e7cffd4b71f90663352a131421e12a6c263ad0cf9be85a49f07c8cc2304f1fdecb3
Feb 12 19:14:38.408916 unknown[900]: fetched base config from "system"
Feb 12 19:14:38.408933 unknown[900]: fetched base config from "system"
Feb 12 19:14:38.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:38.409622 ignition[900]: fetch: fetch complete
Feb 12 19:14:38.444877 kernel: audit: type=1130 audit(1707765278.418:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
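
At this point the fetch stage has pulled the Ignition config from the Azure Instance Metadata Service (IMDS) userData endpoint logged above. A minimal sketch of the equivalent request, assuming the standard IMDS conventions that a Metadata: true header must be sent and that userData comes back base64-encoded:

    # Minimal sketch: fetch user data from Azure IMDS, like the GET in
    # the log above (only works from inside an Azure VM).
    import base64
    import urllib.request

    IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/userData"
                "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        user_data = base64.b64decode(resp.read())  # IMDS returns base64
    print(user_data.decode("utf-8", errors="replace"))
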
Feb 12 19:14:38.408939 unknown[900]: fetched user config from "azure"
Feb 12 19:14:38.409628 ignition[900]: fetch: fetch passed
Feb 12 19:14:38.414337 systemd[1]: Finished ignition-fetch.service.
Feb 12 19:14:38.409673 ignition[900]: Ignition finished successfully
Feb 12 19:14:38.419810 systemd[1]: Starting ignition-kargs.service...
Feb 12 19:14:38.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:38.452646 ignition[906]: Ignition 2.14.0
Feb 12 19:14:38.500655 kernel: audit: type=1130 audit(1707765278.471:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:38.462923 systemd[1]: Finished ignition-kargs.service.
Feb 12 19:14:38.452652 ignition[906]: Stage: kargs
Feb 12 19:14:38.534450 kernel: audit: type=1130 audit(1707765278.512:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:38.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:38.493065 systemd[1]: Starting ignition-disks.service...
Feb 12 19:14:38.452764 ignition[906]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:14:38.507969 systemd[1]: Finished ignition-disks.service.
Feb 12 19:14:38.452795 ignition[906]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:14:38.512997 systemd[1]: Reached target initrd-root-device.target.
Feb 12 19:14:38.455698 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:14:38.539599 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:14:38.459718 ignition[906]: kargs: kargs passed
Feb 12 19:14:38.548425 systemd[1]: Reached target local-fs.target.
Feb 12 19:14:38.459772 ignition[906]: Ignition finished successfully
Feb 12 19:14:38.559316 systemd[1]: Reached target sysinit.target.
Feb 12 19:14:38.499763 ignition[912]: Ignition 2.14.0
Feb 12 19:14:38.567930 systemd[1]: Reached target basic.target.
Feb 12 19:14:38.499769 ignition[912]: Stage: disks
Feb 12 19:14:38.578536 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 19:14:38.499879 ignition[912]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:14:38.499897 ignition[912]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:14:38.504464 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:14:38.506296 ignition[912]: disks: disks passed
Feb 12 19:14:38.506351 ignition[912]: Ignition finished successfully
Feb 12 19:14:38.649343 systemd-fsck[920]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks
Feb 12 19:14:38.661563 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 19:14:38.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:38.690220 systemd[1]: Mounting sysroot.mount...
Feb 12 19:14:38.698873 kernel: audit: type=1130 audit(1707765278.666:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:38.716395 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 19:14:38.716602 systemd[1]: Mounted sysroot.mount.
Feb 12 19:14:38.725881 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 19:14:38.761635 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 19:14:38.766397 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 12 19:14:38.774146 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 19:14:38.774180 systemd[1]: Reached target ignition-diskful.target.
Feb 12 19:14:38.780314 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 19:14:38.827405 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:14:38.832817 systemd[1]: Starting initrd-setup-root.service...
Feb 12 19:14:38.858217 initrd-setup-root[936]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 19:14:38.873501 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (931)
Feb 12 19:14:38.873522 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:14:38.878803 kernel: BTRFS info (device sda6): using free space tree
Feb 12 19:14:38.883864 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 19:14:38.889001 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory
Feb 12 19:14:38.898959 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:14:38.910433 initrd-setup-root[970]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 19:14:38.919011 initrd-setup-root[978]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 19:14:39.399394 systemd[1]: Finished initrd-setup-root.service.
Feb 12 19:14:39.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:39.405990 systemd[1]: Starting ignition-mount.service...
Feb 12 19:14:39.421052 systemd[1]: Starting sysroot-boot.service...
Feb 12 19:14:39.430851 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 12 19:14:39.430967 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 12 19:14:39.460860 ignition[998]: INFO : Ignition 2.14.0
Feb 12 19:14:39.460860 ignition[998]: INFO : Stage: mount
Feb 12 19:14:39.470994 ignition[998]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:14:39.470994 ignition[998]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:14:39.470994 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:14:39.470994 ignition[998]: INFO : mount: mount passed
Feb 12 19:14:39.470994 ignition[998]: INFO : Ignition finished successfully
Feb 12 19:14:39.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:39.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:39.475400 systemd[1]: Finished ignition-mount.service.
Feb 12 19:14:39.500361 systemd[1]: Finished sysroot-boot.service.
Feb 12 19:14:39.953250 coreos-metadata[930]: Feb 12 19:14:39.953 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 12 19:14:39.962742 coreos-metadata[930]: Feb 12 19:14:39.962 INFO Fetch successful
Feb 12 19:14:39.990007 coreos-metadata[930]: Feb 12 19:14:39.989 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 12 19:14:40.017393 coreos-metadata[930]: Feb 12 19:14:40.017 INFO Fetch successful
Feb 12 19:14:40.023559 coreos-metadata[930]: Feb 12 19:14:40.023 INFO wrote hostname ci-3510.3.2-a-e08ac1c56f to /sysroot/etc/hostname
Feb 12 19:14:40.024114 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 12 19:14:40.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:40.038815 systemd[1]: Starting ignition-files.service...
Feb 12 19:14:40.078049 kernel: kauditd_printk_skb: 3 callbacks suppressed
Feb 12 19:14:40.078094 kernel: audit: type=1130 audit(1707765280.037:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:14:40.079007 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:14:40.109204 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1009)
Feb 12 19:14:40.109256 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:14:40.114570 kernel: BTRFS info (device sda6): using free space tree
Feb 12 19:14:40.119525 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 19:14:40.124717 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:14:40.141874 ignition[1028]: INFO : Ignition 2.14.0 Feb 12 19:14:40.141874 ignition[1028]: INFO : Stage: files Feb 12 19:14:40.154540 ignition[1028]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:14:40.154540 ignition[1028]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:14:40.154540 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:14:40.154540 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:14:40.154540 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:14:40.154540 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:14:40.228391 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:14:40.237320 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:14:40.237320 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:14:40.236889 unknown[1028]: wrote ssh authorized keys file for user: core Feb 12 19:14:40.259549 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 12 19:14:40.259549 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 12 19:14:40.763482 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 19:14:41.079327 ignition[1028]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 12 19:14:41.096563 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 12 19:14:41.096563 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 12 19:14:41.119684 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 12 19:14:41.452597 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:14:41.821390 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 12 19:14:41.832089 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 12 19:14:41.832089 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 12 19:14:42.245178 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 19:14:42.693179 ignition[1028]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 12 19:14:42.710074 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 12 19:14:42.710074 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 12 19:14:42.710074 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 12 19:14:43.102435 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 19:14:49.316439 ignition[1028]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 12 19:14:49.334299 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 12 19:14:49.334299 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:14:49.334299 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 12 19:14:49.485668 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 19:14:55.465432 ignition[1028]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 12 19:14:55.485809 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:14:55.485809 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:14:55.485809 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 12 19:14:55.675444 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 12 19:15:10.765995 ignition[1028]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 12 19:15:10.786785 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:15:10.786785 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:15:10.786785 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:15:10.786785 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 12 19:15:10.843309 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 12 19:15:11.207132 ignition[1028]: INFO : files: createFilesystemsFiles: 
createFiles: op(a): GET result: OK Feb 12 19:15:11.620528 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 12 19:15:11.632117 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 12 19:15:11.632117 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 19:15:11.632117 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 12 19:15:11.632117 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 12 19:15:11.632117 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 19:15:11.632117 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 19:15:11.632117 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 19:15:11.632117 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 19:15:11.632117 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:15:11.632117 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:15:11.790781 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1033) Feb 12 19:15:11.790804 kernel: audit: type=1130 audit(1707765311.735:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:11.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:15:11.790858 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 12 19:15:11.790858 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 19:15:11.790858 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1032111908" Feb 12 19:15:11.790858 ignition[1028]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1032111908": device or resource busy Feb 12 19:15:11.790858 ignition[1028]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1032111908", trying btrfs: device or resource busy Feb 12 19:15:11.790858 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1032111908" Feb 12 19:15:11.790858 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1032111908" Feb 12 19:15:11.790858 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem1032111908" Feb 12 19:15:11.790858 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem1032111908" Feb 12 19:15:11.790858 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 12 19:15:11.790858 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 19:15:11.790858 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 19:15:11.790858 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2214902135" Feb 12 19:15:11.790858 ignition[1028]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2214902135": device or resource busy Feb 12 19:15:12.162454 kernel: audit: type=1130 audit(1707765311.848:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.162490 kernel: audit: type=1130 audit(1707765311.912:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.162501 kernel: audit: type=1131 audit(1707765311.912:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.162511 kernel: audit: type=1130 audit(1707765312.029:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:15:12.162520 kernel: audit: type=1131 audit(1707765312.029:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:11.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:11.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:11.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:11.678147 systemd[1]: mnt-oem1032111908.mount: Deactivated successfully. Feb 12 19:15:12.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.186285 ignition[1028]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2214902135", trying btrfs: device or resource busy Feb 12 19:15:12.186285 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2214902135" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2214902135" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem2214902135" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem2214902135" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: op(18): [started] processing unit "waagent.service" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: op(18): [finished] processing unit "waagent.service" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: op(19): [started] processing unit "nvidia.service" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: op(19): [finished] processing unit "nvidia.service" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at 
"/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: op(1c): [started] processing unit "prepare-critools.service" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:15:12.186285 ignition[1028]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service" Feb 12 19:15:12.490213 kernel: audit: type=1130 audit(1707765312.162:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.490244 kernel: audit: type=1131 audit(1707765312.286:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:11.708736 systemd[1]: mnt-oem2214902135.mount: Deactivated successfully. Feb 12 19:15:12.503414 ignition[1028]: INFO : files: op(1e): [started] processing unit "prepare-helm.service" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-helm.service" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: op(21): [started] setting preset to enabled for "waagent.service" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: op(21): [finished] setting preset to enabled for "waagent.service" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: op(22): [started] setting preset to enabled for "nvidia.service" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: op(22): [finished] setting preset to enabled for "nvidia.service" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: op(23): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: op(24): [started] setting preset to enabled for "prepare-critools.service" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:15:12.503414 ignition[1028]: INFO : 
files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:15:12.503414 ignition[1028]: INFO : files: files passed Feb 12 19:15:12.503414 ignition[1028]: INFO : Ignition finished successfully Feb 12 19:15:12.856510 kernel: audit: type=1131 audit(1707765312.509:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.856540 kernel: audit: type=1131 audit(1707765312.564:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:15:12.856809 initrd-setup-root-after-ignition[1053]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:15:12.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:11.719668 systemd[1]: Finished ignition-files.service. Feb 12 19:15:12.883893 iscsid[875]: iscsid shutting down. Feb 12 19:15:12.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:11.778018 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 19:15:11.796539 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 19:15:11.808959 systemd[1]: Starting ignition-quench.service... Feb 12 19:15:11.828279 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:15:11.889907 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:15:11.890000 systemd[1]: Finished ignition-quench.service. Feb 12 19:15:12.953694 ignition[1066]: INFO : Ignition 2.14.0 Feb 12 19:15:12.953694 ignition[1066]: INFO : Stage: umount Feb 12 19:15:12.953694 ignition[1066]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:15:12.953694 ignition[1066]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:15:12.953694 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:15:12.953694 ignition[1066]: INFO : umount: umount passed Feb 12 19:15:12.953694 ignition[1066]: INFO : Ignition finished successfully Feb 12 19:15:12.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:13.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:11.912783 systemd[1]: Reached target ignition-complete.target. Feb 12 19:15:11.987328 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:15:12.024006 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:15:13.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.024098 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:15:13.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.030078 systemd[1]: Reached target initrd-fs.target. 
Feb 12 19:15:13.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:13.246000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:15:12.074176 systemd[1]: Reached target initrd.target. Feb 12 19:15:12.080519 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:15:13.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.092285 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:15:12.157659 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 19:15:13.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.203760 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:15:13.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.230318 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:15:13.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.239026 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:15:12.254331 systemd[1]: Stopped target timers.target. Feb 12 19:15:12.269923 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:15:13.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.269988 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:15:12.313918 systemd[1]: Stopped target initrd.target. Feb 12 19:15:12.327238 systemd[1]: Stopped target basic.target. Feb 12 19:15:13.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.342781 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:15:13.446204 kernel: hv_netvsc 0022487e-2935-0022-487e-29350022487e eth0: Data path switched from VF: enP51287s1 Feb 12 19:15:13.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.359221 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:15:13.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.378190 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:15:12.397417 systemd[1]: Stopped target remote-fs.target. Feb 12 19:15:12.415108 systemd[1]: Stopped target remote-fs-pre.target. 
Feb 12 19:15:13.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.428701 systemd[1]: Stopped target sysinit.target. Feb 12 19:15:13.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.441594 systemd[1]: Stopped target local-fs.target. Feb 12 19:15:13.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.461289 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:15:13.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:13.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.480422 systemd[1]: Stopped target swap.target. Feb 12 19:15:12.496098 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:15:13.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:12.496161 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:15:12.509559 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:15:12.542597 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:15:12.542654 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:15:12.564773 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:15:12.564818 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:15:13.590357 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Feb 12 19:15:12.632849 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:15:12.632909 systemd[1]: Stopped ignition-files.service. Feb 12 19:15:12.638391 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 12 19:15:12.638433 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 12 19:15:12.662984 systemd[1]: Stopping ignition-mount.service... Feb 12 19:15:12.703847 systemd[1]: Stopping iscsid.service... Feb 12 19:15:12.711906 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:15:12.723831 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:15:12.723914 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:15:12.731805 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:15:12.731903 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:15:12.754574 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 19:15:12.754687 systemd[1]: Stopped iscsid.service. Feb 12 19:15:12.772686 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:15:12.772776 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:15:12.798482 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:15:12.798576 systemd[1]: Stopped ignition-mount.service. 
Feb 12 19:15:12.828848 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:15:12.829195 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:15:12.829238 systemd[1]: Stopped ignition-disks.service. Feb 12 19:15:12.850500 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:15:12.850554 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:15:12.861812 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 19:15:12.861855 systemd[1]: Stopped ignition-fetch.service. Feb 12 19:15:12.888724 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:15:12.888778 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:15:12.894089 systemd[1]: Stopped target paths.target. Feb 12 19:15:12.910155 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:15:12.933761 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:15:12.939067 systemd[1]: Stopped target slices.target. Feb 12 19:15:12.949170 systemd[1]: Stopped target sockets.target. Feb 12 19:15:12.959053 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 19:15:12.959107 systemd[1]: Closed iscsid.socket. Feb 12 19:15:12.967133 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:15:12.967179 systemd[1]: Stopped ignition-setup.service. Feb 12 19:15:12.978447 systemd[1]: Stopping iscsiuio.service... Feb 12 19:15:12.993211 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:15:12.993310 systemd[1]: Stopped iscsiuio.service. Feb 12 19:15:13.018334 systemd[1]: Stopped target network.target. Feb 12 19:15:13.117492 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:15:13.117567 systemd[1]: Closed iscsiuio.socket. Feb 12 19:15:13.176710 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:15:13.191166 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:15:13.207422 systemd-networkd[867]: eth0: DHCPv6 lease lost Feb 12 19:15:13.591000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:15:13.207823 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:15:13.207909 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:15:13.219143 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:15:13.219220 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:15:13.233348 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:15:13.233462 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:15:13.247273 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:15:13.247315 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:15:13.257837 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 19:15:13.257889 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:15:13.294172 systemd[1]: Stopping network-cleanup.service... Feb 12 19:15:13.316304 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:15:13.316415 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:15:13.333130 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:15:13.333185 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:15:13.347641 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:15:13.347691 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:15:13.353844 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:15:13.363903 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Feb 12 19:15:13.369548 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:15:13.369695 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:15:13.379354 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:15:13.379431 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:15:13.398693 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:15:13.398728 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:15:13.408771 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:15:13.408819 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:15:13.429494 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:15:13.429549 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:15:13.438520 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:15:13.438565 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:15:13.458107 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:15:13.470647 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 19:15:13.470761 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 19:15:13.486335 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:15:13.486462 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:15:13.491688 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:15:13.491743 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:15:13.502299 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 19:15:13.502870 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:15:13.502978 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:15:13.516839 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:15:13.516945 systemd[1]: Stopped network-cleanup.service. Feb 12 19:15:13.526418 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:15:13.538840 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:15:13.557437 systemd[1]: Switching root. Feb 12 19:15:13.592609 systemd-journald[276]: Journal stopped Feb 12 19:15:26.017144 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:15:26.017163 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 19:15:26.017173 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:15:26.017183 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:15:26.017191 kernel: SELinux: policy capability open_perms=1 Feb 12 19:15:26.017198 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:15:26.017207 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:15:26.017216 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:15:26.017224 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:15:26.017232 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:15:26.017240 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:15:26.017250 systemd[1]: Successfully loaded SELinux policy in 328.060ms. Feb 12 19:15:26.017260 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.357ms. 
Feb 12 19:15:26.017270 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:15:26.017281 systemd[1]: Detected virtualization microsoft. Feb 12 19:15:26.017290 systemd[1]: Detected architecture arm64. Feb 12 19:15:26.017299 systemd[1]: Detected first boot. Feb 12 19:15:26.017308 systemd[1]: Hostname set to <ci-3510.3.2-a-e08ac1c56f>. Feb 12 19:15:26.017317 systemd[1]: Initializing machine ID from random generator. Feb 12 19:15:26.017327 kernel: kauditd_printk_skb: 35 callbacks suppressed Feb 12 19:15:26.017336 kernel: audit: type=1400 audit(1707765316.969:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:15:26.017345 kernel: audit: type=1400 audit(1707765316.969:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:15:26.017355 kernel: audit: type=1334 audit(1707765316.975:85): prog-id=10 op=LOAD Feb 12 19:15:26.017364 kernel: audit: type=1334 audit(1707765316.975:86): prog-id=10 op=UNLOAD Feb 12 19:15:26.017372 kernel: audit: type=1334 audit(1707765316.994:87): prog-id=11 op=LOAD Feb 12 19:15:26.017392 kernel: audit: type=1334 audit(1707765316.994:88): prog-id=11 op=UNLOAD Feb 12 19:15:26.017401 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:15:26.017410 kernel: audit: type=1400 audit(1707765318.291:89): avc: denied { associate } for pid=1099 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:15:26.017421 kernel: audit: type=1300 audit(1707765318.291:89): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014589c a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1082 pid=1099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:15:26.017430 kernel: audit: type=1327 audit(1707765318.291:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:15:26.017445 kernel: audit: type=1400 audit(1707765318.325:90): avc: denied { associate } for pid=1099 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:15:26.017454 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:15:26.017463 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:15:26.017472 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead.
Support for MemoryLimit= will be removed soon. Feb 12 19:15:26.017483 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:15:26.017493 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 12 19:15:26.017501 kernel: audit: type=1334 audit(1707765325.238:91): prog-id=12 op=LOAD Feb 12 19:15:26.017510 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 19:15:26.017518 kernel: audit: type=1334 audit(1707765325.238:92): prog-id=3 op=UNLOAD Feb 12 19:15:26.017527 systemd[1]: Stopped initrd-switch-root.service. Feb 12 19:15:26.017538 kernel: audit: type=1334 audit(1707765325.238:93): prog-id=13 op=LOAD Feb 12 19:15:26.017547 kernel: audit: type=1334 audit(1707765325.238:94): prog-id=14 op=LOAD Feb 12 19:15:26.017557 kernel: audit: type=1334 audit(1707765325.238:95): prog-id=4 op=UNLOAD Feb 12 19:15:26.017566 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 19:15:26.017574 kernel: audit: type=1334 audit(1707765325.238:96): prog-id=5 op=UNLOAD Feb 12 19:15:26.017583 kernel: audit: type=1334 audit(1707765325.239:97): prog-id=15 op=LOAD Feb 12 19:15:26.017592 kernel: audit: type=1334 audit(1707765325.239:98): prog-id=12 op=UNLOAD Feb 12 19:15:26.017600 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:15:26.017609 kernel: audit: type=1334 audit(1707765325.239:99): prog-id=16 op=LOAD Feb 12 19:15:26.017618 kernel: audit: type=1334 audit(1707765325.239:100): prog-id=17 op=LOAD Feb 12 19:15:26.017627 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:15:26.017637 systemd[1]: Created slice system-getty.slice. Feb 12 19:15:26.017646 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:15:26.017656 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:15:26.017665 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:15:26.017674 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:15:26.017683 systemd[1]: Created slice user.slice. Feb 12 19:15:26.017692 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:15:26.017701 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:15:26.017711 systemd[1]: Set up automount boot.automount. Feb 12 19:15:26.017721 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:15:26.017732 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 19:15:26.017741 systemd[1]: Stopped target initrd-fs.target. Feb 12 19:15:26.017750 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 19:15:26.017759 systemd[1]: Reached target integritysetup.target. Feb 12 19:15:26.017768 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:15:26.017777 systemd[1]: Reached target remote-fs.target. Feb 12 19:15:26.017787 systemd[1]: Reached target slices.target. Feb 12 19:15:26.017797 systemd[1]: Reached target swap.target. Feb 12 19:15:26.017806 systemd[1]: Reached target torcx.target. Feb 12 19:15:26.017814 systemd[1]: Reached target veritysetup.target. Feb 12 19:15:26.017824 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:15:26.017833 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:15:26.017842 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:15:26.017853 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:15:26.017862 systemd[1]: Listening on systemd-udevd-kernel.socket. 
Feb 12 19:15:26.017871 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:15:26.017881 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:15:26.017890 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:15:26.017899 systemd[1]: Mounting media.mount... Feb 12 19:15:26.017908 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:15:26.017919 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:15:26.017932 systemd[1]: Mounting tmp.mount... Feb 12 19:15:26.017941 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:15:26.017950 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:15:26.017959 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:15:26.017969 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:15:26.017978 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:15:26.017987 systemd[1]: Starting modprobe@drm.service... Feb 12 19:15:26.017996 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:15:26.018007 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:15:26.018016 systemd[1]: Starting modprobe@loop.service... Feb 12 19:15:26.018025 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:15:26.018035 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 19:15:26.018044 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 19:15:26.018053 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 19:15:26.018062 kernel: loop: module loaded Feb 12 19:15:26.018071 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 19:15:26.018080 systemd[1]: Stopped systemd-journald.service. Feb 12 19:15:26.018090 kernel: fuse: init (API version 7.34) Feb 12 19:15:26.018099 systemd[1]: systemd-journald.service: Consumed 3.903s CPU time. Feb 12 19:15:26.018108 systemd[1]: Starting systemd-journald.service... Feb 12 19:15:26.018118 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:15:26.018128 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:15:26.018138 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:15:26.018147 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:15:26.018156 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 19:15:26.018165 systemd[1]: Stopped verity-setup.service. Feb 12 19:15:26.018176 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:15:26.018185 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:15:26.018194 systemd[1]: Mounted media.mount. Feb 12 19:15:26.018203 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:15:26.018215 systemd-journald[1205]: Journal started Feb 12 19:15:26.018251 systemd-journald[1205]: Runtime Journal (/run/log/journal/28bb95937fb246f58a8a0c723771404a) is 8.0M, max 78.6M, 70.6M free. 
Feb 12 19:15:16.236000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 19:15:16.969000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:15:16.969000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:15:16.975000 audit: BPF prog-id=10 op=LOAD Feb 12 19:15:16.975000 audit: BPF prog-id=10 op=UNLOAD Feb 12 19:15:16.994000 audit: BPF prog-id=11 op=LOAD Feb 12 19:15:16.994000 audit: BPF prog-id=11 op=UNLOAD Feb 12 19:15:18.291000 audit[1099]: AVC avc: denied { associate } for pid=1099 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:15:18.291000 audit[1099]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400014589c a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1082 pid=1099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:15:18.291000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:15:18.325000 audit[1099]: AVC avc: denied { associate } for pid=1099 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:15:18.325000 audit[1099]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145975 a2=1ed a3=0 items=2 ppid=1082 pid=1099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:15:18.325000 audit: CWD cwd="/" Feb 12 19:15:18.325000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:15:18.325000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:15:18.325000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:15:25.238000 audit: BPF prog-id=12 op=LOAD Feb 12 19:15:25.238000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:15:25.238000 audit: BPF prog-id=13 op=LOAD Feb 12 19:15:25.238000 audit: BPF prog-id=14 op=LOAD Feb 12 19:15:25.238000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:15:25.238000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:15:25.239000 audit: BPF prog-id=15 op=LOAD Feb 12 19:15:25.239000 audit: BPF prog-id=12 op=UNLOAD Feb 12 
19:15:25.239000 audit: BPF prog-id=16 op=LOAD Feb 12 19:15:25.239000 audit: BPF prog-id=17 op=LOAD Feb 12 19:15:25.239000 audit: BPF prog-id=13 op=UNLOAD Feb 12 19:15:25.239000 audit: BPF prog-id=14 op=UNLOAD Feb 12 19:15:25.240000 audit: BPF prog-id=18 op=LOAD Feb 12 19:15:25.240000 audit: BPF prog-id=15 op=UNLOAD Feb 12 19:15:25.240000 audit: BPF prog-id=19 op=LOAD Feb 12 19:15:25.240000 audit: BPF prog-id=20 op=LOAD Feb 12 19:15:25.240000 audit: BPF prog-id=16 op=UNLOAD Feb 12 19:15:25.240000 audit: BPF prog-id=17 op=UNLOAD Feb 12 19:15:25.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:25.276000 audit: BPF prog-id=18 op=UNLOAD Feb 12 19:15:25.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:25.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:25.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:25.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:25.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:25.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:25.914000 audit: BPF prog-id=21 op=LOAD Feb 12 19:15:25.914000 audit: BPF prog-id=22 op=LOAD Feb 12 19:15:25.914000 audit: BPF prog-id=23 op=LOAD Feb 12 19:15:25.914000 audit: BPF prog-id=19 op=UNLOAD Feb 12 19:15:25.914000 audit: BPF prog-id=20 op=UNLOAD Feb 12 19:15:25.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.011000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:15:26.011000 audit[1205]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffffc6f2d90 a2=4000 a3=1 items=0 ppid=1 pid=1205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:15:26.011000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:15:25.237839 systemd[1]: Queued start job for default target multi-user.target. 
Feb 12 19:15:18.260464 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:15:25.241179 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 12 19:15:18.260916 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:15:25.242638 systemd[1]: systemd-journald.service: Consumed 3.903s CPU time. Feb 12 19:15:18.260935 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:15:18.260971 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 19:15:18.260981 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 19:15:18.261010 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 19:15:18.261021 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 19:15:18.261215 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 19:15:18.261246 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:15:18.261258 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:15:18.276261 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 19:15:18.276296 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 19:15:18.276315 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 19:15:18.276328 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 19:15:18.276345 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or 
directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 19:15:18.276359 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 19:15:24.219167 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:24Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:15:24.219501 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:24Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:15:24.219625 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:24Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:15:24.219788 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:24Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:15:24.219849 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:24Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 19:15:24.219911 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2024-02-12T19:15:24Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 19:15:26.035908 systemd[1]: Started systemd-journald.service. Feb 12 19:15:26.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.036708 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:15:26.042450 systemd[1]: Mounted tmp.mount. Feb 12 19:15:26.046646 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:15:26.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.051903 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:15:26.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.057554 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:15:26.057686 systemd[1]: Finished modprobe@configfs.service. 
Feb 12 19:15:26.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.063173 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:15:26.063281 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:15:26.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.068950 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:15:26.069066 systemd[1]: Finished modprobe@drm.service. Feb 12 19:15:26.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.075894 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:15:26.076018 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:15:26.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.081790 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:15:26.081912 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:15:26.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.087568 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:15:26.087912 systemd[1]: Finished modprobe@loop.service. Feb 12 19:15:26.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:15:26.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.103215 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:15:26.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.109319 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:15:26.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.115210 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:15:26.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.121268 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:15:26.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.127074 systemd[1]: Reached target network-pre.target. Feb 12 19:15:26.132976 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:15:26.139645 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:15:26.144023 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:15:26.159520 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:15:26.165708 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:15:26.170892 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:15:26.172129 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:15:26.177006 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:15:26.178202 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:15:26.184395 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:15:26.190114 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:15:26.197024 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:15:26.202340 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:15:26.208738 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 19:15:26.235783 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:15:26.240758 systemd-journald[1205]: Time spent on flushing to /var/log/journal/28bb95937fb246f58a8a0c723771404a is 14.594ms for 1148 entries. Feb 12 19:15:26.240758 systemd-journald[1205]: System Journal (/var/log/journal/28bb95937fb246f58a8a0c723771404a) is 8.0M, max 2.6G, 2.6G free. Feb 12 19:15:26.301974 systemd-journald[1205]: Received client request to flush runtime journal. 
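The journald flush statistics above work out to roughly 12.7 microseconds per entry, as a quick check confirms:

```python
# Per-entry cost implied by the systemd-journald flush statistics above.
flush_seconds = 14.594e-3  # "Time spent on flushing ... is 14.594ms"
entries = 1148             # "... for 1148 entries"
print(f"{flush_seconds / entries * 1e6:.1f} us/entry")  # ~12.7 us
```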
Feb 12 19:15:26.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.240910 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:15:26.256279 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:15:26.302884 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:15:26.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.874908 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:15:26.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:26.881533 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:15:27.174094 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:15:27.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:27.258484 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:15:27.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:27.264000 audit: BPF prog-id=24 op=LOAD Feb 12 19:15:27.264000 audit: BPF prog-id=25 op=LOAD Feb 12 19:15:27.264000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:15:27.264000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:15:27.265963 systemd[1]: Starting systemd-udevd.service... Feb 12 19:15:27.284475 systemd-udevd[1224]: Using default interface naming scheme 'v252'. Feb 12 19:15:27.521261 systemd[1]: Started systemd-udevd.service. Feb 12 19:15:27.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:27.531000 audit: BPF prog-id=26 op=LOAD Feb 12 19:15:27.533556 systemd[1]: Starting systemd-networkd.service... Feb 12 19:15:27.560609 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 12 19:15:27.579000 audit: BPF prog-id=27 op=LOAD Feb 12 19:15:27.579000 audit: BPF prog-id=28 op=LOAD Feb 12 19:15:27.579000 audit: BPF prog-id=29 op=LOAD Feb 12 19:15:27.580329 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:15:27.637418 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:15:27.641000 audit[1241]: AVC avc: denied { confidentiality } for pid=1241 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:15:27.652469 systemd[1]: Started systemd-userdbd.service. 
Feb 12 19:15:27.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:27.674347 kernel: hv_vmbus: registering driver hv_balloon Feb 12 19:15:27.674463 kernel: hv_vmbus: registering driver hyperv_fb Feb 12 19:15:27.674481 kernel: hv_utils: Registering HyperV Utility Driver Feb 12 19:15:27.679731 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 12 19:15:27.680500 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 12 19:15:27.680540 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 12 19:15:27.680554 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 12 19:15:27.696417 kernel: hv_vmbus: registering driver hv_utils Feb 12 19:15:27.696523 kernel: hv_utils: Heartbeat IC version 3.0 Feb 12 19:15:27.696559 kernel: hv_utils: Shutdown IC version 3.2 Feb 12 19:15:27.696576 kernel: hv_utils: TimeSync IC version 4.0 Feb 12 19:15:27.397017 kernel: Console: switching to colour dummy device 80x25 Feb 12 19:15:27.444934 systemd-journald[1205]: Time jumped backwards, rotating. Feb 12 19:15:27.445016 kernel: Console: switching to colour frame buffer device 128x48 Feb 12 19:15:27.641000 audit[1241]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaac235f3c0 a1=aa2c a2=ffffacf624b0 a3=aaaac22b9010 items=12 ppid=1224 pid=1241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:15:27.641000 audit: CWD cwd="/" Feb 12 19:15:27.641000 audit: PATH item=0 name=(null) inode=5889 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:15:27.641000 audit: PATH item=1 name=(null) inode=10738 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:15:27.641000 audit: PATH item=2 name=(null) inode=10738 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:15:27.641000 audit: PATH item=3 name=(null) inode=10739 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:15:27.641000 audit: PATH item=4 name=(null) inode=10738 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:15:27.641000 audit: PATH item=5 name=(null) inode=10740 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:15:27.641000 audit: PATH item=6 name=(null) inode=10738 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:15:27.641000 audit: PATH item=7 name=(null) inode=10741 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:15:27.641000 audit: PATH item=8 name=(null) inode=10738 dev=00:0a mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:15:27.641000 audit: PATH item=9 name=(null) inode=10742 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:15:27.641000 audit: PATH item=10 name=(null) inode=10738 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:15:27.641000 audit: PATH item=11 name=(null) inode=10743 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:15:27.641000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:15:27.598427 systemd-networkd[1245]: lo: Link UP Feb 12 19:15:27.598440 systemd-networkd[1245]: lo: Gained carrier Feb 12 19:15:27.598874 systemd-networkd[1245]: Enumeration completed Feb 12 19:15:27.598978 systemd[1]: Started systemd-networkd.service. Feb 12 19:15:27.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:27.605768 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:15:27.630117 systemd-networkd[1245]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:15:27.637543 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1235) Feb 12 19:15:27.659623 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:15:27.669880 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:15:27.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:27.681778 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:15:27.682604 kernel: mlx5_core c857:00:02.0 enP51287s1: Link up Feb 12 19:15:27.710534 kernel: hv_netvsc 0022487e-2935-0022-487e-29350022487e eth0: Data path switched to VF: enP51287s1 Feb 12 19:15:27.710975 systemd-networkd[1245]: enP51287s1: Link UP Feb 12 19:15:27.711086 systemd-networkd[1245]: eth0: Link UP Feb 12 19:15:27.711089 systemd-networkd[1245]: eth0: Gained carrier Feb 12 19:15:27.715748 systemd-networkd[1245]: enP51287s1: Gained carrier Feb 12 19:15:27.735613 systemd-networkd[1245]: eth0: DHCPv4 address 10.200.20.31/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 12 19:15:28.530014 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:15:28.575377 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:15:28.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:28.581167 systemd[1]: Reached target cryptsetup.target. Feb 12 19:15:28.586994 systemd[1]: Starting lvm2-activation.service... Feb 12 19:15:28.590907 lvm[1306]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:15:28.613410 systemd[1]: Finished lvm2-activation.service. 
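The DHCPv4 lease logged above (10.200.20.31/24, gateway 10.200.20.1, served by 168.63.129.16, the Azure wireserver address) can be sanity-checked with the standard library: host and gateway sit in the same /24, so the gateway is on-link.

```python
# Sanity check on the DHCPv4 lease logged above: 10.200.20.31/24 with
# gateway 10.200.20.1, served by 168.63.129.16 (the Azure wireserver).
import ipaddress

iface = ipaddress.ip_interface("10.200.20.31/24")
gateway = ipaddress.ip_address("10.200.20.1")

print(iface.network)             # 10.200.20.0/24
print(gateway in iface.network)  # True: gateway is on-link
```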
Feb 12 19:15:28.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:28.618258 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:15:28.623064 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:15:28.623090 systemd[1]: Reached target local-fs.target. Feb 12 19:15:28.627697 systemd[1]: Reached target machines.target. Feb 12 19:15:28.633691 systemd[1]: Starting ldconfig.service... Feb 12 19:15:28.638341 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:15:28.638409 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:15:28.639692 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:15:28.645418 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:15:28.652450 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:15:28.657659 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:15:28.657719 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:15:28.658885 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:15:28.671929 systemd-tmpfiles[1311]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:15:28.687779 systemd-tmpfiles[1311]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:15:28.689230 systemd-tmpfiles[1311]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:15:28.789767 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1308 (bootctl) Feb 12 19:15:28.791007 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:15:28.835254 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:15:28.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:28.898457 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:15:28.899676 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:15:28.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:28.915636 systemd-networkd[1245]: eth0: Gained IPv6LL Feb 12 19:15:28.920472 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:15:28.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:15:29.021820 systemd-fsck[1316]: fsck.fat 4.2 (2021-01-31) Feb 12 19:15:29.021820 systemd-fsck[1316]: /dev/sda1: 236 files, 113719/258078 clusters Feb 12 19:15:29.023841 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:15:29.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:29.034277 systemd[1]: Mounting boot.mount... Feb 12 19:15:29.042784 systemd[1]: Mounted boot.mount. Feb 12 19:15:29.054112 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:15:29.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:29.307305 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:15:29.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:29.316756 systemd[1]: Starting audit-rules.service... Feb 12 19:15:29.322852 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:15:29.338247 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:15:29.343000 audit: BPF prog-id=30 op=LOAD Feb 12 19:15:29.345470 systemd[1]: Starting systemd-resolved.service... Feb 12 19:15:29.349000 audit: BPF prog-id=31 op=LOAD Feb 12 19:15:29.352108 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:15:29.357945 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:15:29.399000 audit[1328]: SYSTEM_BOOT pid=1328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:15:29.403177 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:15:29.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:29.442911 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:15:29.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:29.448239 systemd[1]: Reached target time-set.target. Feb 12 19:15:29.457255 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:15:29.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:29.464111 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:15:29.515619 systemd-resolved[1326]: Positive Trust Anchors: Feb 12 19:15:29.515633 systemd-resolved[1326]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:15:29.515659 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:15:29.539210 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:15:29.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:29.567645 systemd-resolved[1326]: Using system hostname 'ci-3510.3.2-a-e08ac1c56f'. Feb 12 19:15:29.569231 systemd[1]: Started systemd-resolved.service. Feb 12 19:15:29.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:15:29.575900 systemd[1]: Reached target network.target. Feb 12 19:15:29.580852 systemd[1]: Reached target network-online.target. Feb 12 19:15:29.586336 systemd[1]: Reached target nss-lookup.target. Feb 12 19:15:29.751704 augenrules[1343]: No rules Feb 12 19:15:29.750000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:15:29.750000 audit[1343]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffd321520 a2=420 a3=0 items=0 ppid=1322 pid=1343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:15:29.750000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:15:29.752857 systemd[1]: Finished audit-rules.service. Feb 12 19:15:29.821650 systemd-timesyncd[1327]: Contacted time server 70.184.242.170:123 (0.flatcar.pool.ntp.org). Feb 12 19:15:29.821724 systemd-timesyncd[1327]: Initial clock synchronization to Mon 2024-02-12 19:15:29.824920 UTC. Feb 12 19:15:36.847014 ldconfig[1307]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:15:36.863553 systemd[1]: Finished ldconfig.service. Feb 12 19:15:36.869730 systemd[1]: Starting systemd-update-done.service... Feb 12 19:15:36.903761 systemd[1]: Finished systemd-update-done.service. Feb 12 19:15:36.909052 systemd[1]: Reached target sysinit.target. Feb 12 19:15:36.913887 systemd[1]: Started motdgen.path. Feb 12 19:15:36.917807 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:15:36.924507 systemd[1]: Started logrotate.timer. Feb 12 19:15:36.928789 systemd[1]: Started mdadm.timer. Feb 12 19:15:36.932745 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:15:36.938170 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:15:36.938204 systemd[1]: Reached target paths.target. Feb 12 19:15:36.942989 systemd[1]: Reached target timers.target. 
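The positive trust anchor systemd-resolved reports above is the root zone's DNSSEC DS record. Its fields, per the standard DS RR layout, are key tag (20326), algorithm (8, RSASHA256), digest type (2, SHA-256), and the digest itself; splitting the logged record confirms the digest length matches SHA-256:

```python
# The positive trust anchor reported by systemd-resolved is the root
# zone's DNSSEC DS record: key tag 20326, algorithm 8 (RSASHA256),
# digest type 2 (SHA-256), followed by the digest itself.
record = ("20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
key_tag, algorithm, digest_type, digest = record.split()
print(key_tag, algorithm, digest_type, f"{len(digest) // 2}-byte digest")
# 20326 8 2 32-byte digest  (32 bytes is consistent with SHA-256)
```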
Feb 12 19:15:36.949561 systemd[1]: Listening on dbus.socket. Feb 12 19:15:36.955387 systemd[1]: Starting docker.socket... Feb 12 19:15:36.962007 systemd[1]: Listening on sshd.socket. Feb 12 19:15:36.966797 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:15:36.967334 systemd[1]: Listening on docker.socket. Feb 12 19:15:36.972080 systemd[1]: Reached target sockets.target. Feb 12 19:15:36.977306 systemd[1]: Reached target basic.target. Feb 12 19:15:36.981835 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:15:36.981865 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:15:36.983059 systemd[1]: Starting containerd.service... Feb 12 19:15:36.988615 systemd[1]: Starting dbus.service... Feb 12 19:15:36.993596 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:15:36.999417 systemd[1]: Starting extend-filesystems.service... Feb 12 19:15:37.003961 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:15:37.005159 systemd[1]: Starting motdgen.service... Feb 12 19:15:37.010173 systemd[1]: Started nvidia.service. Feb 12 19:15:37.015892 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:15:37.021711 systemd[1]: Starting prepare-critools.service... Feb 12 19:15:37.027998 systemd[1]: Starting prepare-helm.service... Feb 12 19:15:37.033089 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:15:37.103127 systemd[1]: Starting sshd-keygen.service... Feb 12 19:15:37.110265 systemd[1]: Starting systemd-logind.service... Feb 12 19:15:37.118673 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:15:37.118743 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:15:37.119239 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 19:15:37.120062 systemd[1]: Starting update-engine.service... Feb 12 19:15:37.126293 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:15:37.141624 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:15:37.141815 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:15:37.150064 jq[1353]: false Feb 12 19:15:37.150280 jq[1373]: true Feb 12 19:15:37.150379 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:15:37.150705 systemd[1]: Finished motdgen.service. Feb 12 19:15:37.168602 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:15:37.168770 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Feb 12 19:15:37.199097 env[1382]: time="2024-02-12T19:15:37.199041499Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:15:37.199510 extend-filesystems[1354]: Found sda Feb 12 19:15:37.204702 extend-filesystems[1354]: Found sda1 Feb 12 19:15:37.204702 extend-filesystems[1354]: Found sda2 Feb 12 19:15:37.204702 extend-filesystems[1354]: Found sda3 Feb 12 19:15:37.204702 extend-filesystems[1354]: Found usr Feb 12 19:15:37.204702 extend-filesystems[1354]: Found sda4 Feb 12 19:15:37.204702 extend-filesystems[1354]: Found sda6 Feb 12 19:15:37.204702 extend-filesystems[1354]: Found sda7 Feb 12 19:15:37.204702 extend-filesystems[1354]: Found sda9 Feb 12 19:15:37.204702 extend-filesystems[1354]: Checking size of /dev/sda9 Feb 12 19:15:37.279476 jq[1383]: true Feb 12 19:15:37.279610 env[1382]: time="2024-02-12T19:15:37.248571045Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:15:37.279610 env[1382]: time="2024-02-12T19:15:37.248719105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:15:37.279610 env[1382]: time="2024-02-12T19:15:37.250207265Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:15:37.279610 env[1382]: time="2024-02-12T19:15:37.250238429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:15:37.279610 env[1382]: time="2024-02-12T19:15:37.250442857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:15:37.279610 env[1382]: time="2024-02-12T19:15:37.250458859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:15:37.279610 env[1382]: time="2024-02-12T19:15:37.250470820Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:15:37.279610 env[1382]: time="2024-02-12T19:15:37.250480262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:15:37.279610 env[1382]: time="2024-02-12T19:15:37.250573394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:15:37.279610 env[1382]: time="2024-02-12T19:15:37.250770741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:15:37.213911 systemd-logind[1367]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:15:37.281058 env[1382]: time="2024-02-12T19:15:37.250876035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:15:37.281058 env[1382]: time="2024-02-12T19:15:37.250890917Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 12 19:15:37.281058 env[1382]: time="2024-02-12T19:15:37.250938083Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:15:37.281058 env[1382]: time="2024-02-12T19:15:37.250948485Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:15:37.281058 env[1382]: time="2024-02-12T19:15:37.272716614Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:15:37.281058 env[1382]: time="2024-02-12T19:15:37.272761060Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:15:37.281058 env[1382]: time="2024-02-12T19:15:37.272774582Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:15:37.281058 env[1382]: time="2024-02-12T19:15:37.272815747Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:15:37.281058 env[1382]: time="2024-02-12T19:15:37.272832190Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:15:37.281058 env[1382]: time="2024-02-12T19:15:37.272846152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:15:37.281058 env[1382]: time="2024-02-12T19:15:37.272859553Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:15:37.281058 env[1382]: time="2024-02-12T19:15:37.273220402Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:15:37.281058 env[1382]: time="2024-02-12T19:15:37.273237564Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:15:37.217797 systemd-logind[1367]: New seat seat0. Feb 12 19:15:37.281795 env[1382]: time="2024-02-12T19:15:37.273250966Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:15:37.281795 env[1382]: time="2024-02-12T19:15:37.273262808Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:15:37.281795 env[1382]: time="2024-02-12T19:15:37.273276890Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:15:37.281795 env[1382]: time="2024-02-12T19:15:37.273780957Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:15:37.281795 env[1382]: time="2024-02-12T19:15:37.273886652Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:15:37.281795 env[1382]: time="2024-02-12T19:15:37.274109162Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:15:37.281795 env[1382]: time="2024-02-12T19:15:37.274134285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:15:37.281795 env[1382]: time="2024-02-12T19:15:37.274148767Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Feb 12 19:15:37.281795 env[1382]: time="2024-02-12T19:15:37.274194333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:15:37.281795 env[1382]: time="2024-02-12T19:15:37.274207695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:15:37.281795 env[1382]: time="2024-02-12T19:15:37.274220016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:15:37.281795 env[1382]: time="2024-02-12T19:15:37.274231058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:15:37.281795 env[1382]: time="2024-02-12T19:15:37.274242940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:15:37.281795 env[1382]: time="2024-02-12T19:15:37.274255261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:15:37.282117 env[1382]: time="2024-02-12T19:15:37.274268143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:15:37.282117 env[1382]: time="2024-02-12T19:15:37.274280425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:15:37.282117 env[1382]: time="2024-02-12T19:15:37.274295547Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:15:37.282117 env[1382]: time="2024-02-12T19:15:37.274434565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:15:37.282117 env[1382]: time="2024-02-12T19:15:37.274452488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:15:37.282117 env[1382]: time="2024-02-12T19:15:37.274465529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:15:37.282117 env[1382]: time="2024-02-12T19:15:37.274478851Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:15:37.282117 env[1382]: time="2024-02-12T19:15:37.274509215Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:15:37.282117 env[1382]: time="2024-02-12T19:15:37.274521737Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:15:37.282117 env[1382]: time="2024-02-12T19:15:37.274540900Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:15:37.282117 env[1382]: time="2024-02-12T19:15:37.274574904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 19:15:37.282328 env[1382]: time="2024-02-12T19:15:37.274767330Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:15:37.282328 env[1382]: time="2024-02-12T19:15:37.274820097Z" level=info msg="Connect containerd service" Feb 12 19:15:37.282328 env[1382]: time="2024-02-12T19:15:37.274854302Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:15:37.282328 env[1382]: time="2024-02-12T19:15:37.275409296Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:15:37.282328 env[1382]: time="2024-02-12T19:15:37.275713337Z" level=info msg="Start subscribing containerd event" Feb 12 19:15:37.282328 env[1382]: time="2024-02-12T19:15:37.275756703Z" level=info msg="Start recovering state" Feb 12 19:15:37.282328 env[1382]: time="2024-02-12T19:15:37.275817191Z" level=info msg="Start event monitor" Feb 12 19:15:37.282328 env[1382]: time="2024-02-12T19:15:37.275836474Z" level=info msg="Start snapshots syncer" Feb 12 19:15:37.282328 env[1382]: time="2024-02-12T19:15:37.275844435Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:15:37.282328 env[1382]: time="2024-02-12T19:15:37.275853196Z" level=info msg="Start streaming server" Feb 12 19:15:37.282328 env[1382]: time="2024-02-12T19:15:37.276084267Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 12 19:15:37.282328 env[1382]: time="2024-02-12T19:15:37.276161038Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:15:37.322241 env[1382]: time="2024-02-12T19:15:37.289291085Z" level=info msg="containerd successfully booted in 0.091011s" Feb 12 19:15:37.322270 extend-filesystems[1354]: Old size kept for /dev/sda9 Feb 12 19:15:37.322270 extend-filesystems[1354]: Found sr0 Feb 12 19:15:37.289344 systemd[1]: Started containerd.service. Feb 12 19:15:37.308199 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:15:37.308367 systemd[1]: Finished extend-filesystems.service. Feb 12 19:15:37.377153 bash[1417]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:15:37.377860 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:15:37.399549 systemd[1]: nvidia.service: Deactivated successfully. Feb 12 19:15:37.402377 dbus-daemon[1352]: [system] SELinux support is enabled Feb 12 19:15:37.402592 systemd[1]: Started dbus.service. Feb 12 19:15:37.408675 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:15:37.408697 systemd[1]: Reached target system-config.target. Feb 12 19:15:37.416923 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:15:37.416943 systemd[1]: Reached target user-config.target. Feb 12 19:15:37.429186 dbus-daemon[1352]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 12 19:15:37.429481 systemd[1]: Started systemd-logind.service. Feb 12 19:15:37.506532 tar[1381]: linux-arm64/helm Feb 12 19:15:37.508655 tar[1379]: ./ Feb 12 19:15:37.508655 tar[1379]: ./macvlan Feb 12 19:15:37.511476 tar[1380]: crictl Feb 12 19:15:37.568470 tar[1379]: ./static Feb 12 19:15:37.616553 tar[1379]: ./vlan Feb 12 19:15:37.677707 tar[1379]: ./portmap Feb 12 19:15:37.735357 tar[1379]: ./host-local Feb 12 19:15:37.789001 tar[1379]: ./vrf Feb 12 19:15:37.844704 tar[1379]: ./bridge Feb 12 19:15:37.885462 update_engine[1371]: I0212 19:15:37.870889 1371 main.cc:92] Flatcar Update Engine starting Feb 12 19:15:37.912097 tar[1379]: ./tuning Feb 12 19:15:37.954010 systemd[1]: Started update-engine.service. Feb 12 19:15:37.958820 update_engine[1371]: I0212 19:15:37.954046 1371 update_check_scheduler.cc:74] Next update check in 9m18s Feb 12 19:15:37.961976 systemd[1]: Started locksmithd.service. Feb 12 19:15:37.979279 tar[1379]: ./firewall Feb 12 19:15:38.045671 tar[1379]: ./host-device Feb 12 19:15:38.108973 tar[1379]: ./sbr Feb 12 19:15:38.180329 tar[1379]: ./loopback Feb 12 19:15:38.199007 systemd[1]: Finished prepare-critools.service. Feb 12 19:15:38.238103 tar[1379]: ./dhcp Feb 12 19:15:38.242380 tar[1381]: linux-arm64/LICENSE Feb 12 19:15:38.242455 tar[1381]: linux-arm64/README.md Feb 12 19:15:38.246767 systemd[1]: Finished prepare-helm.service. Feb 12 19:15:38.323441 tar[1379]: ./ptp Feb 12 19:15:38.355639 tar[1379]: ./ipvlan Feb 12 19:15:38.387164 tar[1379]: ./bandwidth Feb 12 19:15:38.485690 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:15:39.841655 locksmithd[1455]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:15:39.947066 sshd_keygen[1370]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:15:39.964119 systemd[1]: Finished sshd-keygen.service. 
Feb 12 19:15:39.971738 systemd[1]: Starting issuegen.service... Feb 12 19:15:39.976969 systemd[1]: Started waagent.service. Feb 12 19:15:39.982558 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:15:39.982737 systemd[1]: Finished issuegen.service. Feb 12 19:15:39.989087 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:15:39.996179 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:15:40.003579 systemd[1]: Started getty@tty1.service. Feb 12 19:15:40.010243 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 12 19:15:40.016239 systemd[1]: Reached target getty.target. Feb 12 19:15:40.026354 systemd[1]: Reached target multi-user.target. Feb 12 19:15:40.032908 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:15:40.045606 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:15:40.045784 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:15:40.052480 systemd[1]: Startup finished in 735ms (kernel) + 45.117s (initrd) + 24.642s (userspace) = 1min 10.495s. Feb 12 19:15:40.684782 login[1478]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 12 19:15:40.685170 login[1477]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 19:15:40.739094 systemd[1]: Created slice user-500.slice. Feb 12 19:15:40.740174 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:15:40.743360 systemd-logind[1367]: New session 2 of user core. Feb 12 19:15:40.783655 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:15:40.785064 systemd[1]: Starting user@500.service... Feb 12 19:15:40.802649 (systemd)[1481]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:15:41.107288 systemd[1481]: Queued start job for default target default.target. Feb 12 19:15:41.107817 systemd[1481]: Reached target paths.target. Feb 12 19:15:41.107838 systemd[1481]: Reached target sockets.target. Feb 12 19:15:41.107849 systemd[1481]: Reached target timers.target. Feb 12 19:15:41.107859 systemd[1481]: Reached target basic.target. Feb 12 19:15:41.107902 systemd[1481]: Reached target default.target. Feb 12 19:15:41.107924 systemd[1481]: Startup finished in 299ms. Feb 12 19:15:41.107969 systemd[1]: Started user@500.service. Feb 12 19:15:41.108885 systemd[1]: Started session-2.scope. Feb 12 19:15:41.686553 login[1478]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 19:15:41.690751 systemd[1]: Started session-1.scope. Feb 12 19:15:41.691088 systemd-logind[1367]: New session 1 of user core. Feb 12 19:15:48.168050 waagent[1475]: 2024-02-12T19:15:48.167935Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 12 19:15:48.192968 waagent[1475]: 2024-02-12T19:15:48.192875Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 12 19:15:48.198651 waagent[1475]: 2024-02-12T19:15:48.198574Z INFO Daemon Daemon Python: 3.9.16 Feb 12 19:15:48.206500 waagent[1475]: 2024-02-12T19:15:48.203558Z INFO Daemon Daemon Run daemon Feb 12 19:15:48.208739 waagent[1475]: 2024-02-12T19:15:48.208670Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 12 19:15:48.225846 waagent[1475]: 2024-02-12T19:15:48.225697Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
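The "Startup finished" line above decomposes total boot time into kernel, initrd, and userspace stages. The components sum to the printed total, modulo rounding of each printed figure:

```python
# The three stage timings in the "Startup finished" line above should sum
# to the printed total; each component is rounded, so the sum can differ
# from the logged "1min 10.495s" by a millisecond.
kernel, initrd, userspace = 0.735, 45.117, 24.642  # seconds, from the log
total = kernel + initrd + userspace
print(f"{total:.3f}s")                             # 70.494s
print(f"{int(total // 60)}min {total % 60:.3f}s")  # 1min 10.494s
```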
Feb 12 19:15:48.242865 waagent[1475]: 2024-02-12T19:15:48.242729Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:15:48.254076 waagent[1475]: 2024-02-12T19:15:48.253994Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:15:48.261277 waagent[1475]: 2024-02-12T19:15:48.261201Z INFO Daemon Daemon Using waagent for provisioning Feb 12 19:15:48.268375 waagent[1475]: 2024-02-12T19:15:48.268299Z INFO Daemon Daemon Activate resource disk Feb 12 19:15:48.278702 waagent[1475]: 2024-02-12T19:15:48.278625Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 12 19:15:48.294591 waagent[1475]: 2024-02-12T19:15:48.294487Z INFO Daemon Daemon Found device: None Feb 12 19:15:48.300114 waagent[1475]: 2024-02-12T19:15:48.300038Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 12 19:15:48.310420 waagent[1475]: 2024-02-12T19:15:48.310343Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 12 19:15:48.325472 waagent[1475]: 2024-02-12T19:15:48.325401Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:15:48.332392 waagent[1475]: 2024-02-12T19:15:48.332323Z INFO Daemon Daemon Running default provisioning handler Feb 12 19:15:48.345958 waagent[1475]: 2024-02-12T19:15:48.345810Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 12 19:15:48.363311 waagent[1475]: 2024-02-12T19:15:48.363155Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:15:48.374744 waagent[1475]: 2024-02-12T19:15:48.374652Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:15:48.380942 waagent[1475]: 2024-02-12T19:15:48.380864Z INFO Daemon Daemon Copying ovf-env.xml Feb 12 19:15:48.469632 waagent[1475]: 2024-02-12T19:15:48.469427Z INFO Daemon Daemon Successfully mounted dvd Feb 12 19:15:48.576259 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 12 19:15:48.595836 waagent[1475]: 2024-02-12T19:15:48.595682Z INFO Daemon Daemon Detect protocol endpoint Feb 12 19:15:48.601683 waagent[1475]: 2024-02-12T19:15:48.601598Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:15:48.608934 waagent[1475]: 2024-02-12T19:15:48.608862Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 12 19:15:48.617067 waagent[1475]: 2024-02-12T19:15:48.616991Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 12 19:15:48.623779 waagent[1475]: 2024-02-12T19:15:48.623709Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 12 19:15:48.629920 waagent[1475]: 2024-02-12T19:15:48.629848Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 12 19:15:48.744041 waagent[1475]: 2024-02-12T19:15:48.743971Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 12 19:15:48.752969 waagent[1475]: 2024-02-12T19:15:48.752919Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 12 19:15:48.760399 waagent[1475]: 2024-02-12T19:15:48.760313Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 12 19:15:49.738457 waagent[1475]: 2024-02-12T19:15:49.738312Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 12 19:15:49.755775 waagent[1475]: 2024-02-12T19:15:49.755684Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 12 19:15:49.763681 waagent[1475]: 2024-02-12T19:15:49.763584Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 12 19:15:49.843221 waagent[1475]: 2024-02-12T19:15:49.843079Z INFO Daemon Daemon Found private key matching thumbprint 1F1B888EAB51A637BC87F248ED7D7DA5765209F4 Feb 12 19:15:49.853014 waagent[1475]: 2024-02-12T19:15:49.852916Z INFO Daemon Daemon Certificate with thumbprint 14983D481BCF50996264A39966E1460EA993B16A has no matching private key. Feb 12 19:15:49.864013 waagent[1475]: 2024-02-12T19:15:49.863911Z INFO Daemon Daemon Fetch goal state completed Feb 12 19:15:49.930666 waagent[1475]: 2024-02-12T19:15:49.930603Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 7bc5f459-9072-4a1c-9a41-03732ef74f00 New eTag: 5290496808394297364] Feb 12 19:15:49.944416 waagent[1475]: 2024-02-12T19:15:49.944325Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:15:49.962227 waagent[1475]: 2024-02-12T19:15:49.962142Z INFO Daemon Daemon Starting provisioning Feb 12 19:15:49.968654 waagent[1475]: 2024-02-12T19:15:49.968553Z INFO Daemon Daemon Handle ovf-env.xml. Feb 12 19:15:49.974892 waagent[1475]: 2024-02-12T19:15:49.974802Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-e08ac1c56f] Feb 12 19:15:50.012470 waagent[1475]: 2024-02-12T19:15:50.012340Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-e08ac1c56f] Feb 12 19:15:50.019874 waagent[1475]: 2024-02-12T19:15:50.019773Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 12 19:15:50.028721 waagent[1475]: 2024-02-12T19:15:50.028636Z INFO Daemon Daemon Primary interface is [eth0] Feb 12 19:15:50.046285 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 12 19:15:50.046459 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 12 19:15:50.046536 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 12 19:15:50.046781 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:15:50.050552 systemd-networkd[1245]: eth0: DHCPv6 lease lost Feb 12 19:15:50.051834 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:15:50.052002 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:15:50.054104 systemd[1]: Starting systemd-networkd.service... 
Feb 12 19:15:50.081405 systemd-networkd[1527]: enP51287s1: Link UP Feb 12 19:15:50.081698 systemd-networkd[1527]: enP51287s1: Gained carrier Feb 12 19:15:50.082662 systemd-networkd[1527]: eth0: Link UP Feb 12 19:15:50.082747 systemd-networkd[1527]: eth0: Gained carrier Feb 12 19:15:50.083121 systemd-networkd[1527]: lo: Link UP Feb 12 19:15:50.083185 systemd-networkd[1527]: lo: Gained carrier Feb 12 19:15:50.083485 systemd-networkd[1527]: eth0: Gained IPv6LL Feb 12 19:15:50.083884 systemd-networkd[1527]: Enumeration completed Feb 12 19:15:50.084593 systemd[1]: Started systemd-networkd.service. Feb 12 19:15:50.085802 systemd-networkd[1527]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:15:50.086396 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:15:50.089520 waagent[1475]: 2024-02-12T19:15:50.089212Z INFO Daemon Daemon Create user account if not exists Feb 12 19:15:50.097058 waagent[1475]: 2024-02-12T19:15:50.096957Z INFO Daemon Daemon User core already exists, skip useradd Feb 12 19:15:50.104614 waagent[1475]: 2024-02-12T19:15:50.104515Z INFO Daemon Daemon Configure sudoer Feb 12 19:15:50.110013 waagent[1475]: 2024-02-12T19:15:50.109929Z INFO Daemon Daemon Configure sshd Feb 12 19:15:50.115598 systemd-networkd[1527]: eth0: DHCPv4 address 10.200.20.31/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 12 19:15:50.115691 waagent[1475]: 2024-02-12T19:15:50.115537Z INFO Daemon Daemon Deploy ssh public key. Feb 12 19:15:50.127551 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:15:51.376754 waagent[1475]: 2024-02-12T19:15:51.376687Z INFO Daemon Daemon Provisioning complete Feb 12 19:15:51.403717 waagent[1475]: 2024-02-12T19:15:51.403651Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 12 19:15:51.411949 waagent[1475]: 2024-02-12T19:15:51.411867Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 12 19:15:51.425638 waagent[1475]: 2024-02-12T19:15:51.425552Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 12 19:15:51.726598 waagent[1536]: 2024-02-12T19:15:51.726489Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 12 19:15:51.727678 waagent[1536]: 2024-02-12T19:15:51.727623Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:15:51.727919 waagent[1536]: 2024-02-12T19:15:51.727871Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:15:51.740344 waagent[1536]: 2024-02-12T19:15:51.740272Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 12 19:15:51.740660 waagent[1536]: 2024-02-12T19:15:51.740609Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 12 19:15:51.814691 waagent[1536]: 2024-02-12T19:15:51.814555Z INFO ExtHandler ExtHandler Found private key matching thumbprint 1F1B888EAB51A637BC87F248ED7D7DA5765209F4 Feb 12 19:15:51.815055 waagent[1536]: 2024-02-12T19:15:51.815003Z INFO ExtHandler ExtHandler Certificate with thumbprint 14983D481BCF50996264A39966E1460EA993B16A has no matching private key. 
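Goal-state certificates are matched purely by thumbprint here: 1F1B888EAB51A637BC87F248ED7D7DA5765209F4 pairs with a private key, 14983D481BCF50996264A39966E1460EA993B16A does not. Assuming the usual Azure convention of uppercase hex SHA-1 over the DER encoding, a thumbprint can be recomputed with the cryptography package:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    def thumbprint(pem_bytes: bytes) -> str:
        # SHA-1 of the DER-encoded certificate, rendered as uppercase hex;
        # assumed to be the format these log lines use (standard for Azure).
        cert = x509.load_pem_x509_certificate(pem_bytes)
        return cert.fingerprint(hashes.SHA1()).hex().upper()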
Feb 12 19:15:51.815374 waagent[1536]: 2024-02-12T19:15:51.815318Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 12 19:15:51.831978 waagent[1536]: 2024-02-12T19:15:51.831922Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 43ea677c-9827-44a3-945b-3630de56a693 New eTag: 5290496808394297364] Feb 12 19:15:51.832780 waagent[1536]: 2024-02-12T19:15:51.832723Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:15:51.914421 waagent[1536]: 2024-02-12T19:15:51.914285Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:15:51.940240 waagent[1536]: 2024-02-12T19:15:51.940140Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1536 Feb 12 19:15:51.944224 waagent[1536]: 2024-02-12T19:15:51.944156Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:15:51.945717 waagent[1536]: 2024-02-12T19:15:51.945657Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:15:52.101838 waagent[1536]: 2024-02-12T19:15:52.101728Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:15:52.102403 waagent[1536]: 2024-02-12T19:15:52.102347Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:15:52.110088 waagent[1536]: 2024-02-12T19:15:52.110032Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 12 19:15:52.110796 waagent[1536]: 2024-02-12T19:15:52.110740Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:15:52.112066 waagent[1536]: 2024-02-12T19:15:52.112005Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 12 19:15:52.113568 waagent[1536]: 2024-02-12T19:15:52.113477Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:15:52.113851 waagent[1536]: 2024-02-12T19:15:52.113781Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:15:52.114395 waagent[1536]: 2024-02-12T19:15:52.114324Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:15:52.115013 waagent[1536]: 2024-02-12T19:15:52.114948Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 12 19:15:52.115345 waagent[1536]: 2024-02-12T19:15:52.115286Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:15:52.115345 waagent[1536]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:15:52.115345 waagent[1536]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:15:52.115345 waagent[1536]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:15:52.115345 waagent[1536]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:15:52.115345 waagent[1536]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:15:52.115345 waagent[1536]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:15:52.117596 waagent[1536]: 2024-02-12T19:15:52.117410Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 12 19:15:52.118170 waagent[1536]: 2024-02-12T19:15:52.118092Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:15:52.118712 waagent[1536]: 2024-02-12T19:15:52.118639Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:15:52.119358 waagent[1536]: 2024-02-12T19:15:52.119283Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:15:52.119551 waagent[1536]: 2024-02-12T19:15:52.119472Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:15:52.119789 waagent[1536]: 2024-02-12T19:15:52.119722Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:15:52.119876 waagent[1536]: 2024-02-12T19:15:52.119822Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:15:52.120049 waagent[1536]: 2024-02-12T19:15:52.119988Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:15:52.121560 waagent[1536]: 2024-02-12T19:15:52.121384Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:15:52.121731 waagent[1536]: 2024-02-12T19:15:52.121664Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 12 19:15:52.122316 waagent[1536]: 2024-02-12T19:15:52.122249Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:15:52.132763 waagent[1536]: 2024-02-12T19:15:52.132690Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 12 19:15:52.134153 waagent[1536]: 2024-02-12T19:15:52.134099Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:15:52.135995 waagent[1536]: 2024-02-12T19:15:52.135939Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 12 19:15:52.159358 waagent[1536]: 2024-02-12T19:15:52.159213Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1527' Feb 12 19:15:52.187272 waagent[1536]: 2024-02-12T19:15:52.187206Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
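The /proc/net/route dump above stores IPv4 addresses as little-endian hex, which is why the gateway appears as 0114C80A. Decoding in Python reproduces the values the agent acts on:

    import socket
    import struct

    def decode(hex_addr: str) -> str:
        # /proc/net/route is little-endian; repack big-endian for dotted quad
        return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

    assert decode("0114C80A") == "10.200.20.1"      # default gateway
    assert decode("0014C80A") == "10.200.20.0"      # local subnet
    assert decode("10813FA8") == "168.63.129.16"    # wireserver
    assert decode("FEA9FEA9") == "169.254.169.254"  # link-local metadata endpoint

The 10813FA8 row is the route to 168.63.129.16 whose existence the daemon verified earlier during protocol detection.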
Feb 12 19:15:52.289282 waagent[1536]: 2024-02-12T19:15:52.289217Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 12 19:15:52.428867 waagent[1475]: 2024-02-12T19:15:52.428673Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 12 19:15:52.432811 waagent[1475]: 2024-02-12T19:15:52.432747Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 12 19:15:53.614036 waagent[1563]: 2024-02-12T19:15:53.613926Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 12 19:15:53.614758 waagent[1563]: 2024-02-12T19:15:53.614697Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 12 19:15:53.614895 waagent[1563]: 2024-02-12T19:15:53.614847Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 12 19:15:53.623027 waagent[1563]: 2024-02-12T19:15:53.622884Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:15:53.623484 waagent[1563]: 2024-02-12T19:15:53.623426Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:15:53.623672 waagent[1563]: 2024-02-12T19:15:53.623619Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:15:53.637012 waagent[1563]: 2024-02-12T19:15:53.636923Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 12 19:15:53.664921 waagent[1563]: 2024-02-12T19:15:53.664861Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 12 19:15:53.666019 waagent[1563]: 2024-02-12T19:15:53.665959Z INFO ExtHandler Feb 12 19:15:53.666167 waagent[1563]: 2024-02-12T19:15:53.666120Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 598ee372-23e1-4a33-9924-215419a55e44 eTag: 5290496808394297364 source: Fabric] Feb 12 19:15:53.666919 waagent[1563]: 2024-02-12T19:15:53.666861Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 12 19:15:53.668136 waagent[1563]: 2024-02-12T19:15:53.668073Z INFO ExtHandler Feb 12 19:15:53.668265 waagent[1563]: 2024-02-12T19:15:53.668219Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 12 19:15:53.675044 waagent[1563]: 2024-02-12T19:15:53.674995Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 12 19:15:53.675549 waagent[1563]: 2024-02-12T19:15:53.675481Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:15:53.698822 waagent[1563]: 2024-02-12T19:15:53.698758Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
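The hand-off above is the agent's self-update path: the packaged 2.6.0.2 build discovers 2.9.1.1 in the goal state and exits so the daemon can relaunch the newer build. The ordering itself is plain version comparison; a sketch using the packaging library rather than waagent's own version class:

    from packaging.version import Version

    installed, discovered = Version("2.6.0.2"), Version("2.9.1.1")
    if discovered > installed:
        # corresponds to "discovered update WALinuxAgent-2.9.1.1 -- exiting"
        print(f"updating {installed} -> {discovered}")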
Feb 12 19:15:53.773537 waagent[1563]: 2024-02-12T19:15:53.773363Z INFO ExtHandler Downloaded certificate {'thumbprint': '1F1B888EAB51A637BC87F248ED7D7DA5765209F4', 'hasPrivateKey': True} Feb 12 19:15:53.774723 waagent[1563]: 2024-02-12T19:15:53.774658Z INFO ExtHandler Downloaded certificate {'thumbprint': '14983D481BCF50996264A39966E1460EA993B16A', 'hasPrivateKey': False} Feb 12 19:15:53.775867 waagent[1563]: 2024-02-12T19:15:53.775802Z INFO ExtHandler Fetch goal state completed Feb 12 19:15:53.803609 waagent[1563]: 2024-02-12T19:15:53.803532Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1563 Feb 12 19:15:53.807178 waagent[1563]: 2024-02-12T19:15:53.807108Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:15:53.808698 waagent[1563]: 2024-02-12T19:15:53.808640Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:15:53.813473 waagent[1563]: 2024-02-12T19:15:53.813408Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:15:53.813905 waagent[1563]: 2024-02-12T19:15:53.813842Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:15:53.821529 waagent[1563]: 2024-02-12T19:15:53.821440Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 12 19:15:53.822067 waagent[1563]: 2024-02-12T19:15:53.822006Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:15:53.828211 waagent[1563]: 2024-02-12T19:15:53.828094Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 12 19:15:53.831904 waagent[1563]: 2024-02-12T19:15:53.831843Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 12 19:15:53.833473 waagent[1563]: 2024-02-12T19:15:53.833400Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:15:53.834206 waagent[1563]: 2024-02-12T19:15:53.834138Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 12 19:15:53.834739 waagent[1563]: 2024-02-12T19:15:53.834678Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:15:53.835026 waagent[1563]: 2024-02-12T19:15:53.834972Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:15:53.835262 waagent[1563]: 2024-02-12T19:15:53.835214Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:15:53.835472 waagent[1563]: 2024-02-12T19:15:53.835427Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:15:53.835674 waagent[1563]: 2024-02-12T19:15:53.835602Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:15:53.836036 waagent[1563]: 2024-02-12T19:15:53.835966Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:15:53.837402 waagent[1563]: 2024-02-12T19:15:53.837323Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 12 19:15:53.837764 waagent[1563]: 2024-02-12T19:15:53.837691Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:15:53.838129 waagent[1563]: 2024-02-12T19:15:53.838070Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:15:53.838708 waagent[1563]: 2024-02-12T19:15:53.838633Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:15:53.839476 waagent[1563]: 2024-02-12T19:15:53.839414Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 12 19:15:53.839724 waagent[1563]: 2024-02-12T19:15:53.839658Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:15:53.840035 waagent[1563]: 2024-02-12T19:15:53.839970Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:15:53.840035 waagent[1563]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:15:53.840035 waagent[1563]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:15:53.840035 waagent[1563]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:15:53.840035 waagent[1563]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:15:53.840035 waagent[1563]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:15:53.840035 waagent[1563]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:15:53.841079 waagent[1563]: 2024-02-12T19:15:53.841017Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:15:53.877583 waagent[1563]: 2024-02-12T19:15:53.877438Z INFO MonitorHandler ExtHandler Network interfaces: Feb 12 19:15:53.877583 waagent[1563]: Executing ['ip', '-a', '-o', 'link']: Feb 12 19:15:53.877583 waagent[1563]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 12 19:15:53.877583 waagent[1563]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7e:29:35 brd ff:ff:ff:ff:ff:ff Feb 12 19:15:53.877583 waagent[1563]: 3: enP51287s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7e:29:35 brd ff:ff:ff:ff:ff:ff\ altname enP51287p0s2 Feb 12 19:15:53.877583 waagent[1563]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 12 19:15:53.877583 waagent[1563]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 12 19:15:53.877583 waagent[1563]: 2: eth0 inet 10.200.20.31/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 12 19:15:53.877583 waagent[1563]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 12 19:15:53.877583 waagent[1563]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 12 19:15:53.877583 waagent[1563]: 2: eth0 inet6 fe80::222:48ff:fe7e:2935/64 scope link \ valid_lft forever preferred_lft forever Feb 12 19:15:53.882265 waagent[1563]: 2024-02-12T19:15:53.882177Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 12 19:15:53.884194 waagent[1563]: 2024-02-12T19:15:53.884121Z INFO ExtHandler ExtHandler Downloading manifest Feb 12 19:15:53.902729 waagent[1563]: 2024-02-12T19:15:53.902671Z INFO ExtHandler ExtHandler Feb 12 19:15:53.903039 waagent[1563]: 2024-02-12T19:15:53.902984Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 
0b7e3dc1-e643-480d-96be-c08217277cb3 correlation 58d628b1-347f-42a4-8060-de3135887021 created: 2024-02-12T19:13:39.335747Z] Feb 12 19:15:53.904088 waagent[1563]: 2024-02-12T19:15:53.904029Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 12 19:15:53.906040 waagent[1563]: 2024-02-12T19:15:53.905984Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Feb 12 19:15:53.933177 waagent[1563]: 2024-02-12T19:15:53.933087Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 12 19:15:53.953311 waagent[1563]: 2024-02-12T19:15:53.953238Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E7B88E30-25E5-4288-96FD-675799DE18E5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 12 19:15:54.101484 waagent[1563]: 2024-02-12T19:15:54.101352Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Feb 12 19:15:54.101484 waagent[1563]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:15:54.101484 waagent[1563]: pkts bytes target prot opt in out source destination Feb 12 19:15:54.101484 waagent[1563]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:15:54.101484 waagent[1563]: pkts bytes target prot opt in out source destination Feb 12 19:15:54.101484 waagent[1563]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:15:54.101484 waagent[1563]: pkts bytes target prot opt in out source destination Feb 12 19:15:54.101484 waagent[1563]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 12 19:15:54.101484 waagent[1563]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:15:54.101484 waagent[1563]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:15:54.110369 waagent[1563]: 2024-02-12T19:15:54.110239Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 12 19:15:54.110369 waagent[1563]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:15:54.110369 waagent[1563]: pkts bytes target prot opt in out source destination Feb 12 19:15:54.110369 waagent[1563]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:15:54.110369 waagent[1563]: pkts bytes target prot opt in out source destination Feb 12 19:15:54.110369 waagent[1563]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:15:54.110369 waagent[1563]: pkts bytes target prot opt in out source destination Feb 12 19:15:54.110369 waagent[1563]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 12 19:15:54.110369 waagent[1563]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:15:54.110369 waagent[1563]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:15:54.111267 waagent[1563]: 2024-02-12T19:15:54.111219Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 12 19:16:15.496688 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 12 19:16:23.590337 update_engine[1371]: I0212 19:16:23.590278 1371 update_attempter.cc:509] Updating boot flags... Feb 12 19:16:51.581485 systemd[1]: Created slice system-sshd.slice. Feb 12 19:16:51.582625 systemd[1]: Started sshd@0-10.200.20.31:22-10.200.12.6:59848.service. 
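The firewall dump above shows the three OUTPUT rules waagent programs for the wireserver: DNS traffic to 168.63.129.16 is allowed, root (owner UID 0) may reach it, and any other new or invalid connection is dropped. Equivalent iptables invocations, sketched via subprocess (waagent installs these itself; this is only an illustration, and rule order matters since the ACCEPTs must precede the DROP):

    import subprocess

    WIRESERVER = "168.63.129.16"

    # Equivalents of the three OUTPUT rules in the dump above.
    RULES = [
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53",
         "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

    for rule in RULES:
        subprocess.run(["iptables", "-w"] + rule, check=True)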
Feb 12 19:16:52.201097 sshd[1682]: Accepted publickey for core from 10.200.12.6 port 59848 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:16:52.240189 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:16:52.244775 systemd[1]: Started session-3.scope. Feb 12 19:16:52.245583 systemd-logind[1367]: New session 3 of user core. Feb 12 19:16:52.560374 systemd[1]: Started sshd@1-10.200.20.31:22-10.200.12.6:59852.service. Feb 12 19:16:52.994791 sshd[1687]: Accepted publickey for core from 10.200.12.6 port 59852 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:16:52.996045 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:16:52.999788 systemd-logind[1367]: New session 4 of user core. Feb 12 19:16:53.000197 systemd[1]: Started session-4.scope. Feb 12 19:16:53.309333 sshd[1687]: pam_unix(sshd:session): session closed for user core Feb 12 19:16:53.311625 systemd[1]: sshd@1-10.200.20.31:22-10.200.12.6:59852.service: Deactivated successfully. Feb 12 19:16:53.312307 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:16:53.312820 systemd-logind[1367]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:16:53.314260 systemd-logind[1367]: Removed session 4. Feb 12 19:16:53.377641 systemd[1]: Started sshd@2-10.200.20.31:22-10.200.12.6:59858.service. Feb 12 19:16:53.773094 sshd[1693]: Accepted publickey for core from 10.200.12.6 port 59858 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:16:53.774330 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:16:53.778133 systemd-logind[1367]: New session 5 of user core. Feb 12 19:16:53.778590 systemd[1]: Started session-5.scope. Feb 12 19:16:54.060547 sshd[1693]: pam_unix(sshd:session): session closed for user core Feb 12 19:16:54.063384 systemd-logind[1367]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:16:54.063471 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:16:54.064473 systemd[1]: sshd@2-10.200.20.31:22-10.200.12.6:59858.service: Deactivated successfully. Feb 12 19:16:54.065407 systemd-logind[1367]: Removed session 5. Feb 12 19:16:54.127448 systemd[1]: Started sshd@3-10.200.20.31:22-10.200.12.6:59860.service. Feb 12 19:16:54.525374 sshd[1699]: Accepted publickey for core from 10.200.12.6 port 59860 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:16:54.526950 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:16:54.530968 systemd[1]: Started session-6.scope. Feb 12 19:16:54.531405 systemd-logind[1367]: New session 6 of user core. Feb 12 19:16:54.819061 sshd[1699]: pam_unix(sshd:session): session closed for user core Feb 12 19:16:54.821707 systemd[1]: sshd@3-10.200.20.31:22-10.200.12.6:59860.service: Deactivated successfully. Feb 12 19:16:54.822336 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:16:54.823084 systemd-logind[1367]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:16:54.823745 systemd-logind[1367]: Removed session 6. Feb 12 19:16:54.893741 systemd[1]: Started sshd@4-10.200.20.31:22-10.200.12.6:59870.service. 
Feb 12 19:16:55.322671 sshd[1705]: Accepted publickey for core from 10.200.12.6 port 59870 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:16:55.323930 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:16:55.327727 systemd-logind[1367]: New session 7 of user core. Feb 12 19:16:55.328116 systemd[1]: Started session-7.scope. Feb 12 19:16:55.847914 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:16:55.848110 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:16:56.559969 systemd[1]: Starting docker.service... Feb 12 19:16:56.608859 env[1723]: time="2024-02-12T19:16:56.608806711Z" level=info msg="Starting up" Feb 12 19:16:56.610178 env[1723]: time="2024-02-12T19:16:56.610155548Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:16:56.610273 env[1723]: time="2024-02-12T19:16:56.610260388Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:16:56.610345 env[1723]: time="2024-02-12T19:16:56.610330867Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:16:56.610403 env[1723]: time="2024-02-12T19:16:56.610390627Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:16:56.611861 env[1723]: time="2024-02-12T19:16:56.611838864Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:16:56.611958 env[1723]: time="2024-02-12T19:16:56.611945584Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:16:56.612019 env[1723]: time="2024-02-12T19:16:56.612005704Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:16:56.612071 env[1723]: time="2024-02-12T19:16:56.612059424Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:16:56.720547 env[1723]: time="2024-02-12T19:16:56.720510639Z" level=info msg="Loading containers: start." Feb 12 19:16:56.870560 kernel: Initializing XFRM netlink socket Feb 12 19:16:56.896046 env[1723]: time="2024-02-12T19:16:56.896012195Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 19:16:57.027090 systemd-networkd[1527]: docker0: Link UP Feb 12 19:16:57.047543 env[1723]: time="2024-02-12T19:16:57.047478683Z" level=info msg="Loading containers: done." Feb 12 19:16:57.055949 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1966365274-merged.mount: Deactivated successfully. Feb 12 19:16:57.070661 env[1723]: time="2024-02-12T19:16:57.070621837Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 19:16:57.070832 env[1723]: time="2024-02-12T19:16:57.070808916Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 19:16:57.070930 env[1723]: time="2024-02-12T19:16:57.070911516Z" level=info msg="Daemon has completed initialization" Feb 12 19:16:57.126625 systemd[1]: Started docker.service. Feb 12 19:16:57.132609 env[1723]: time="2024-02-12T19:16:57.132549991Z" level=info msg="API listen on /run/docker.sock" Feb 12 19:16:57.154151 systemd[1]: Reloading. 
Feb 12 19:16:57.232202 /usr/lib/systemd/system-generators/torcx-generator[1856]: time="2024-02-12T19:16:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:16:57.235539 /usr/lib/systemd/system-generators/torcx-generator[1856]: time="2024-02-12T19:16:57Z" level=info msg="torcx already run" Feb 12 19:16:57.298399 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:16:57.298418 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:16:57.315383 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:16:57.402878 systemd[1]: Started kubelet.service. Feb 12 19:16:57.461334 kubelet[1912]: E0212 19:16:57.461248 1912 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:16:57.463602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:16:57.463730 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:17:01.594477 env[1382]: time="2024-02-12T19:17:01.594431461Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 19:17:02.577304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914574696.mount: Deactivated successfully. 
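The kubelet exit above is self-describing: without --container-runtime-endpoint the flags fail validation and systemd records status=1/FAILURE. Given the containerd socket logged earlier, the conventional fix is a systemd drop-in along these lines (the drop-in path and kubelet binary location are hypothetical for this image):

    # /etc/systemd/system/kubelet.service.d/10-runtime.conf (hypothetical)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock

The empty ExecStart= clears the unit's original command before re-declaring it, the standard drop-in idiom for amending flags.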
Feb 12 19:17:04.364776 env[1382]: time="2024-02-12T19:17:04.364720012Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:04.377020 env[1382]: time="2024-02-12T19:17:04.376972432Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:04.382688 env[1382]: time="2024-02-12T19:17:04.382647942Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:04.388749 env[1382]: time="2024-02-12T19:17:04.388701572Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:04.389352 env[1382]: time="2024-02-12T19:17:04.389325651Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 12 19:17:04.398720 env[1382]: time="2024-02-12T19:17:04.398687995Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 19:17:06.108401 env[1382]: time="2024-02-12T19:17:06.108346538Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:06.114648 env[1382]: time="2024-02-12T19:17:06.114609447Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:06.125714 env[1382]: time="2024-02-12T19:17:06.125671310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:06.132220 env[1382]: time="2024-02-12T19:17:06.132172499Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:06.132964 env[1382]: time="2024-02-12T19:17:06.132932818Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 12 19:17:06.141570 env[1382]: time="2024-02-12T19:17:06.141527764Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 19:17:07.403368 env[1382]: time="2024-02-12T19:17:07.403315455Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:07.409363 env[1382]: time="2024-02-12T19:17:07.409313565Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:07.413832 env[1382]: 
time="2024-02-12T19:17:07.413801158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:07.418018 env[1382]: time="2024-02-12T19:17:07.417988872Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:07.418786 env[1382]: time="2024-02-12T19:17:07.418758150Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 12 19:17:07.427001 env[1382]: time="2024-02-12T19:17:07.426937137Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:17:07.475037 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 19:17:07.475203 systemd[1]: Stopped kubelet.service. Feb 12 19:17:07.476657 systemd[1]: Started kubelet.service. Feb 12 19:17:07.527406 kubelet[1950]: E0212 19:17:07.527337 1950 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:17:07.530710 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:17:07.530829 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:17:08.601006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount796004887.mount: Deactivated successfully. Feb 12 19:17:09.413964 env[1382]: time="2024-02-12T19:17:09.413906700Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:09.420683 env[1382]: time="2024-02-12T19:17:09.420639610Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:09.425349 env[1382]: time="2024-02-12T19:17:09.425302963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:09.431661 env[1382]: time="2024-02-12T19:17:09.431627874Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:09.432066 env[1382]: time="2024-02-12T19:17:09.432035073Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 12 19:17:09.440631 env[1382]: time="2024-02-12T19:17:09.440589860Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 19:17:10.113815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1515765734.mount: Deactivated successfully. 
Feb 12 19:17:10.151107 env[1382]: time="2024-02-12T19:17:10.151064752Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:10.166618 env[1382]: time="2024-02-12T19:17:10.166558689Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:10.177368 env[1382]: time="2024-02-12T19:17:10.177321473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:10.187757 env[1382]: time="2024-02-12T19:17:10.187720338Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:10.188263 env[1382]: time="2024-02-12T19:17:10.188234657Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 12 19:17:10.197155 env[1382]: time="2024-02-12T19:17:10.197108124Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 19:17:11.152587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860127913.mount: Deactivated successfully. Feb 12 19:17:14.306337 env[1382]: time="2024-02-12T19:17:14.306291257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:14.319957 env[1382]: time="2024-02-12T19:17:14.319899519Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:14.324455 env[1382]: time="2024-02-12T19:17:14.324413793Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:14.330756 env[1382]: time="2024-02-12T19:17:14.330717624Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:14.331513 env[1382]: time="2024-02-12T19:17:14.331475103Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 12 19:17:14.341412 env[1382]: time="2024-02-12T19:17:14.341370290Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 19:17:15.120457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4173110903.mount: Deactivated successfully. 
Feb 12 19:17:15.572277 env[1382]: time="2024-02-12T19:17:15.572232168Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:15.579206 env[1382]: time="2024-02-12T19:17:15.579160358Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:15.585049 env[1382]: time="2024-02-12T19:17:15.585015391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:15.590036 env[1382]: time="2024-02-12T19:17:15.590002784Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:15.590632 env[1382]: time="2024-02-12T19:17:15.590603343Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 12 19:17:17.724987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 12 19:17:17.725163 systemd[1]: Stopped kubelet.service. Feb 12 19:17:17.726639 systemd[1]: Started kubelet.service. Feb 12 19:17:17.775297 kubelet[2023]: E0212 19:17:17.775250 2023 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:17:17.777161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:17:17.777288 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:17:21.024517 systemd[1]: Stopped kubelet.service. Feb 12 19:17:21.039403 systemd[1]: Reloading. Feb 12 19:17:21.099571 /usr/lib/systemd/system-generators/torcx-generator[2055]: time="2024-02-12T19:17:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:17:21.101607 /usr/lib/systemd/system-generators/torcx-generator[2055]: time="2024-02-12T19:17:21Z" level=info msg="torcx already run" Feb 12 19:17:21.170604 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:17:21.170622 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:17:21.187567 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:17:21.277617 systemd[1]: Started kubelet.service. Feb 12 19:17:21.326656 kubelet[2114]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 12 19:17:21.327001 kubelet[2114]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:17:21.327139 kubelet[2114]: I0212 19:17:21.327107 2114 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:17:21.328464 kubelet[2114]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:17:21.328588 kubelet[2114]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:17:22.179060 kubelet[2114]: I0212 19:17:22.179025 2114 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:17:22.179060 kubelet[2114]: I0212 19:17:22.179054 2114 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:17:22.179269 kubelet[2114]: I0212 19:17:22.179250 2114 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:17:22.182246 kubelet[2114]: E0212 19:17:22.182223 2114 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:22.182384 kubelet[2114]: I0212 19:17:22.182372 2114 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:17:22.183767 kubelet[2114]: W0212 19:17:22.183749 2114 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 19:17:22.184235 kubelet[2114]: I0212 19:17:22.184220 2114 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:17:22.184429 kubelet[2114]: I0212 19:17:22.184416 2114 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:17:22.184511 kubelet[2114]: I0212 19:17:22.184479 2114 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:17:22.184609 kubelet[2114]: I0212 19:17:22.184529 2114 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:17:22.184609 kubelet[2114]: I0212 19:17:22.184542 2114 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:17:22.184657 kubelet[2114]: I0212 19:17:22.184638 2114 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:17:22.187402 kubelet[2114]: I0212 19:17:22.187386 2114 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:17:22.187530 kubelet[2114]: I0212 19:17:22.187519 2114 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:17:22.187617 kubelet[2114]: I0212 19:17:22.187607 2114 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:17:22.187671 kubelet[2114]: I0212 19:17:22.187662 2114 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:17:22.189808 kubelet[2114]: W0212 19:17:22.189643 2114 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e08ac1c56f&limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:22.189808 kubelet[2114]: E0212 19:17:22.189693 2114 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e08ac1c56f&limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:22.190115 kubelet[2114]: W0212 19:17:22.190067 2114 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 
19:17:22.190115 kubelet[2114]: E0212 19:17:22.190109 2114 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:22.190196 kubelet[2114]: I0212 19:17:22.190165 2114 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:17:22.190432 kubelet[2114]: W0212 19:17:22.190413 2114 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 19:17:22.190805 kubelet[2114]: I0212 19:17:22.190779 2114 server.go:1186] "Started kubelet" Feb 12 19:17:22.204320 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 19:17:22.205237 kubelet[2114]: E0212 19:17:22.204860 2114 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e08ac1c56f.17b3339f1d77cf34", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e08ac1c56f", UID:"ci-3510.3.2-a-e08ac1c56f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e08ac1c56f"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 190757684, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 190757684, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.20.31:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.31:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:17:22.205866 kubelet[2114]: I0212 19:17:22.205846 2114 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:17:22.206759 kubelet[2114]: I0212 19:17:22.206738 2114 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:17:22.207964 kubelet[2114]: E0212 19:17:22.205949 2114 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:17:22.207964 kubelet[2114]: E0212 19:17:22.207310 2114 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:17:22.207964 kubelet[2114]: I0212 19:17:22.206096 2114 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:17:22.209599 kubelet[2114]: E0212 19:17:22.209573 2114 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e08ac1c56f\" not found" Feb 12 19:17:22.209757 kubelet[2114]: I0212 19:17:22.209735 2114 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:17:22.209892 kubelet[2114]: I0212 19:17:22.209880 2114 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:17:22.210416 kubelet[2114]: E0212 19:17:22.210383 2114 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e08ac1c56f?timeout=10s": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:22.210661 kubelet[2114]: W0212 19:17:22.210624 2114 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:22.210738 kubelet[2114]: E0212 19:17:22.210729 2114 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:22.263782 kubelet[2114]: I0212 19:17:22.263751 2114 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:17:22.263782 kubelet[2114]: I0212 19:17:22.263770 2114 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:17:22.263782 kubelet[2114]: I0212 19:17:22.263788 2114 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:17:22.270550 kubelet[2114]: I0212 19:17:22.270484 2114 policy_none.go:49] "None policy: Start" Feb 12 19:17:22.271244 kubelet[2114]: I0212 19:17:22.271219 2114 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:17:22.271349 kubelet[2114]: I0212 19:17:22.271339 2114 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:17:22.280161 systemd[1]: Created slice kubepods.slice. Feb 12 19:17:22.284542 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 19:17:22.287838 systemd[1]: Created slice kubepods-besteffort.slice. 
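[Annotation] The three systemd slices just created (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice) correspond to the pod QoS classes; with CgroupDriver:systemd (see the nodeConfig dump above), each pod's cgroup is parented under the slice for its class. A minimal Go sketch of the classification rule, simplified from the Kubernetes definition (the types and names below are illustrative, not kubelet internals):

    package main

    import "fmt"

    // Resources is a cut-down stand-in for one container's resource spec.
    type Resources struct {
        Requests map[string]string // e.g. {"cpu": "100m", "memory": "128Mi"}
        Limits   map[string]string
    }

    // qosClass applies the (simplified) rule: no requests or limits anywhere
    // -> BestEffort; requests equal to limits for every resource of every
    // container -> Guaranteed; anything in between -> Burstable.
    func qosClass(containers []Resources) string {
        anySet, guaranteed := false, true
        for _, c := range containers {
            if len(c.Requests) > 0 || len(c.Limits) > 0 {
                anySet = true
            }
            if len(c.Limits) == 0 {
                guaranteed = false
            }
            for res, req := range c.Requests {
                if c.Limits[res] != req {
                    guaranteed = false
                }
            }
        }
        switch {
        case !anySet:
            return "BestEffort" // pod cgroup under kubepods-besteffort.slice
        case guaranteed:
            return "Guaranteed" // pod cgroup directly under kubepods.slice
        default:
            return "Burstable" // pod cgroup under kubepods-burstable.slice
        }
    }

    func main() {
        fmt.Println(qosClass([]Resources{{}})) // BestEffort
        fmt.Println(qosClass([]Resources{{
            Requests: map[string]string{"cpu": "100m"},
            Limits:   map[string]string{"cpu": "100m"},
        }})) // Guaranteed
    }

The static control-plane pods created later in this log land in kubepods-burstable-pod<uid>.slice, consistent with this rule: they set requests without matching limits.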
Feb 12 19:17:22.296141 kubelet[2114]: I0212 19:17:22.295204 2114 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:17:22.296141 kubelet[2114]: I0212 19:17:22.295411 2114 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:17:22.298630 kubelet[2114]: E0212 19:17:22.298250 2114 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-e08ac1c56f\" not found" Feb 12 19:17:22.311777 kubelet[2114]: I0212 19:17:22.311756 2114 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.312299 kubelet[2114]: E0212 19:17:22.312286 2114 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.411067 kubelet[2114]: E0212 19:17:22.411032 2114 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e08ac1c56f?timeout=10s": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:22.416300 kubelet[2114]: I0212 19:17:22.416272 2114 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:17:22.494507 kubelet[2114]: I0212 19:17:22.494476 2114 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 19:17:22.494721 kubelet[2114]: I0212 19:17:22.494708 2114 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:17:22.494794 kubelet[2114]: I0212 19:17:22.494785 2114 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:17:22.494890 kubelet[2114]: E0212 19:17:22.494881 2114 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:17:22.495407 kubelet[2114]: W0212 19:17:22.495374 2114 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:22.495536 kubelet[2114]: E0212 19:17:22.495522 2114 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:22.514571 kubelet[2114]: I0212 19:17:22.514548 2114 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.515046 kubelet[2114]: E0212 19:17:22.515030 2114 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.595373 kubelet[2114]: I0212 19:17:22.595346 2114 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:17:22.596821 kubelet[2114]: I0212 19:17:22.596803 2114 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:17:22.598130 kubelet[2114]: I0212 19:17:22.598099 2114 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:17:22.599294 kubelet[2114]: I0212 19:17:22.599182 2114 status_manager.go:698] "Failed to get status for pod" 
podUID=775e084c3ccdd96a9fc06d3ec2ddf61d pod="kube-system/kube-scheduler-ci-3510.3.2-a-e08ac1c56f" err="Get \"https://10.200.20.31:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-e08ac1c56f\": dial tcp 10.200.20.31:6443: connect: connection refused" Feb 12 19:17:22.601448 kubelet[2114]: I0212 19:17:22.601431 2114 status_manager.go:698] "Failed to get status for pod" podUID=d3f122774b25aa8272b57c0e8f8d0800 pod="kube-system/kube-apiserver-ci-3510.3.2-a-e08ac1c56f" err="Get \"https://10.200.20.31:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-e08ac1c56f\": dial tcp 10.200.20.31:6443: connect: connection refused" Feb 12 19:17:22.601816 kubelet[2114]: I0212 19:17:22.601794 2114 status_manager.go:698] "Failed to get status for pod" podUID=d871768a7561c2aa53899865f6350435 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e08ac1c56f" err="Get \"https://10.200.20.31:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-e08ac1c56f\": dial tcp 10.200.20.31:6443: connect: connection refused" Feb 12 19:17:22.603971 systemd[1]: Created slice kubepods-burstable-pod775e084c3ccdd96a9fc06d3ec2ddf61d.slice. Feb 12 19:17:22.612894 kubelet[2114]: I0212 19:17:22.612817 2114 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3f122774b25aa8272b57c0e8f8d0800-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d3f122774b25aa8272b57c0e8f8d0800\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.612894 kubelet[2114]: I0212 19:17:22.612854 2114 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3f122774b25aa8272b57c0e8f8d0800-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d3f122774b25aa8272b57c0e8f8d0800\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.612894 kubelet[2114]: I0212 19:17:22.612887 2114 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/775e084c3ccdd96a9fc06d3ec2ddf61d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-e08ac1c56f\" (UID: \"775e084c3ccdd96a9fc06d3ec2ddf61d\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.613052 kubelet[2114]: I0212 19:17:22.612908 2114 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3f122774b25aa8272b57c0e8f8d0800-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d3f122774b25aa8272b57c0e8f8d0800\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.613052 kubelet[2114]: I0212 19:17:22.612928 2114 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d871768a7561c2aa53899865f6350435-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d871768a7561c2aa53899865f6350435\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.613052 kubelet[2114]: I0212 19:17:22.612948 2114 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d871768a7561c2aa53899865f6350435-flexvolume-dir\") pod 
\"kube-controller-manager-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d871768a7561c2aa53899865f6350435\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.613052 kubelet[2114]: I0212 19:17:22.613030 2114 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d871768a7561c2aa53899865f6350435-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d871768a7561c2aa53899865f6350435\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.613142 kubelet[2114]: I0212 19:17:22.613076 2114 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d871768a7561c2aa53899865f6350435-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d871768a7561c2aa53899865f6350435\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.613142 kubelet[2114]: I0212 19:17:22.613098 2114 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d871768a7561c2aa53899865f6350435-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d871768a7561c2aa53899865f6350435\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.615741 systemd[1]: Created slice kubepods-burstable-podd3f122774b25aa8272b57c0e8f8d0800.slice. Feb 12 19:17:22.628779 systemd[1]: Created slice kubepods-burstable-podd871768a7561c2aa53899865f6350435.slice. Feb 12 19:17:22.812030 kubelet[2114]: E0212 19:17:22.811920 2114 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e08ac1c56f?timeout=10s": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:22.914201 env[1382]: time="2024-02-12T19:17:22.914158462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-e08ac1c56f,Uid:775e084c3ccdd96a9fc06d3ec2ddf61d,Namespace:kube-system,Attempt:0,}" Feb 12 19:17:22.917189 kubelet[2114]: I0212 19:17:22.917167 2114 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.917530 kubelet[2114]: E0212 19:17:22.917513 2114 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:22.919329 env[1382]: time="2024-02-12T19:17:22.919277856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-e08ac1c56f,Uid:d3f122774b25aa8272b57c0e8f8d0800,Namespace:kube-system,Attempt:0,}" Feb 12 19:17:22.932389 env[1382]: time="2024-02-12T19:17:22.932352321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-e08ac1c56f,Uid:d871768a7561c2aa53899865f6350435,Namespace:kube-system,Attempt:0,}" Feb 12 19:17:23.170630 kubelet[2114]: W0212 19:17:23.170486 2114 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:23.170630 kubelet[2114]: E0212 
19:17:23.170568 2114 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.31:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:23.191344 kubelet[2114]: W0212 19:17:23.191270 2114 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e08ac1c56f&limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:23.191344 kubelet[2114]: E0212 19:17:23.191325 2114 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e08ac1c56f&limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:23.362918 kubelet[2114]: W0212 19:17:23.362858 2114 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:23.362918 kubelet[2114]: E0212 19:17:23.362895 2114 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:23.573864 kubelet[2114]: W0212 19:17:23.573828 2114 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:23.573864 kubelet[2114]: E0212 19:17:23.573865 2114 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:23.612298 kubelet[2114]: E0212 19:17:23.612246 2114 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.20.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e08ac1c56f?timeout=10s": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:23.655329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1138127641.mount: Deactivated successfully. 
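[Annotation] The lease-controller retries above back off by doubling: 200ms, 400ms, 800ms, then 1.6s, each scheduled after another connection-refused failure against 10.200.20.31:6443. A minimal sketch of that doubling-retry pattern (illustrative; the cap value below is an assumption, not taken from the kubelet):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // ensureWithBackoff retries ensure() until it succeeds, doubling the
    // delay after each failure up to max.
    func ensureWithBackoff(ensure func() error, base, max time.Duration) {
        delay := base
        for {
            err := ensure()
            if err == nil {
                return
            }
            fmt.Printf("failed to ensure lease exists, will retry in %v, error: %v\n", delay, err)
            time.Sleep(delay)
            if delay *= 2; delay > max {
                delay = max
            }
        }
    }

    func main() {
        attempt := 0
        ensureWithBackoff(func() error {
            attempt++
            if attempt <= 4 { // first four attempts fail, as in the log
                return errors.New("dial tcp 10.200.20.31:6443: connect: connection refused")
            }
            return nil
        }, 200*time.Millisecond, 7*time.Second)
    }

Run as-is, this prints the same 200ms / 400ms / 800ms / 1.6s progression seen in the journal before the apiserver comes up.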
Feb 12 19:17:23.699131 env[1382]: time="2024-02-12T19:17:23.699080506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:23.703944 env[1382]: time="2024-02-12T19:17:23.703914541Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:23.716945 env[1382]: time="2024-02-12T19:17:23.716905726Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:23.719202 kubelet[2114]: I0212 19:17:23.719175 2114 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:23.719542 kubelet[2114]: E0212 19:17:23.719492 2114 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.31:6443/api/v1/nodes\": dial tcp 10.200.20.31:6443: connect: connection refused" node="ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:23.725980 env[1382]: time="2024-02-12T19:17:23.725949916Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:23.736391 env[1382]: time="2024-02-12T19:17:23.736358224Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:23.741072 env[1382]: time="2024-02-12T19:17:23.741044419Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:23.750080 env[1382]: time="2024-02-12T19:17:23.750054369Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:23.755428 env[1382]: time="2024-02-12T19:17:23.755404363Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:23.761130 env[1382]: time="2024-02-12T19:17:23.761106677Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:23.766827 env[1382]: time="2024-02-12T19:17:23.766803470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:23.769823 env[1382]: time="2024-02-12T19:17:23.769792867Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:23.788550 env[1382]: time="2024-02-12T19:17:23.788512006Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 19:17:23.886485 env[1382]: time="2024-02-12T19:17:23.885990778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:17:23.886681 env[1382]: time="2024-02-12T19:17:23.886644897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:17:23.886762 env[1382]: time="2024-02-12T19:17:23.886742297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:17:23.887115 env[1382]: time="2024-02-12T19:17:23.887026257Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ab2ec75e74be1443852d7c3e36ff3359cf64501373c969bf2a53d12cefc5958 pid=2190 runtime=io.containerd.runc.v2 Feb 12 19:17:23.905719 systemd[1]: Started cri-containerd-0ab2ec75e74be1443852d7c3e36ff3359cf64501373c969bf2a53d12cefc5958.scope. Feb 12 19:17:23.930580 env[1382]: time="2024-02-12T19:17:23.926985052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:17:23.930580 env[1382]: time="2024-02-12T19:17:23.927027932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:17:23.930580 env[1382]: time="2024-02-12T19:17:23.927037852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:17:23.935138 env[1382]: time="2024-02-12T19:17:23.927192332Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e9ff90c30b76babbfe2bdbe65804b2ce2786d26c87784055e7a3009efc02c42 pid=2223 runtime=io.containerd.runc.v2 Feb 12 19:17:23.942127 systemd[1]: Started cri-containerd-6e9ff90c30b76babbfe2bdbe65804b2ce2786d26c87784055e7a3009efc02c42.scope. Feb 12 19:17:23.943991 env[1382]: time="2024-02-12T19:17:23.941294516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:17:23.943991 env[1382]: time="2024-02-12T19:17:23.941333836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:17:23.943991 env[1382]: time="2024-02-12T19:17:23.941343316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:17:23.943991 env[1382]: time="2024-02-12T19:17:23.941524276Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a9ad48ade31a1a80132cbeeadd2410d96b9990470f6e1e6b2fe00682b810b2ee pid=2252 runtime=io.containerd.runc.v2 Feb 12 19:17:23.952109 env[1382]: time="2024-02-12T19:17:23.952065104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-e08ac1c56f,Uid:775e084c3ccdd96a9fc06d3ec2ddf61d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ab2ec75e74be1443852d7c3e36ff3359cf64501373c969bf2a53d12cefc5958\"" Feb 12 19:17:23.956688 env[1382]: time="2024-02-12T19:17:23.956651459Z" level=info msg="CreateContainer within sandbox \"0ab2ec75e74be1443852d7c3e36ff3359cf64501373c969bf2a53d12cefc5958\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 19:17:23.967016 systemd[1]: Started cri-containerd-a9ad48ade31a1a80132cbeeadd2410d96b9990470f6e1e6b2fe00682b810b2ee.scope. Feb 12 19:17:23.992963 env[1382]: time="2024-02-12T19:17:23.992925979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-e08ac1c56f,Uid:d871768a7561c2aa53899865f6350435,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e9ff90c30b76babbfe2bdbe65804b2ce2786d26c87784055e7a3009efc02c42\"" Feb 12 19:17:23.996002 env[1382]: time="2024-02-12T19:17:23.995954135Z" level=info msg="CreateContainer within sandbox \"6e9ff90c30b76babbfe2bdbe65804b2ce2786d26c87784055e7a3009efc02c42\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 19:17:24.005685 env[1382]: time="2024-02-12T19:17:24.005648565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-e08ac1c56f,Uid:d3f122774b25aa8272b57c0e8f8d0800,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9ad48ade31a1a80132cbeeadd2410d96b9990470f6e1e6b2fe00682b810b2ee\"" Feb 12 19:17:24.008323 env[1382]: time="2024-02-12T19:17:24.008289242Z" level=info msg="CreateContainer within sandbox \"a9ad48ade31a1a80132cbeeadd2410d96b9990470f6e1e6b2fe00682b810b2ee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 19:17:24.083730 env[1382]: time="2024-02-12T19:17:24.083676319Z" level=info msg="CreateContainer within sandbox \"0ab2ec75e74be1443852d7c3e36ff3359cf64501373c969bf2a53d12cefc5958\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a2dcc0e4ed2a6b0662dc6c8042eb48ed4b604abcc9cb3df16473aab3fba240c2\"" Feb 12 19:17:24.084334 env[1382]: time="2024-02-12T19:17:24.084310079Z" level=info msg="StartContainer for \"a2dcc0e4ed2a6b0662dc6c8042eb48ed4b604abcc9cb3df16473aab3fba240c2\"" Feb 12 19:17:24.100388 systemd[1]: Started cri-containerd-a2dcc0e4ed2a6b0662dc6c8042eb48ed4b604abcc9cb3df16473aab3fba240c2.scope. 
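[Annotation] The sandbox and container messages above trace the CRI sequence the kubelet drives for each static pod: RunPodSandbox returns a sandbox id, CreateContainer creates the container inside that sandbox, and StartContainer runs it. A self-contained sketch of the sequence (the interface below is a simplification of the CRI RuntimeService, not the real gRPC API):

    package main

    import "fmt"

    // runtimeService is a cut-down stand-in for the CRI RuntimeService.
    type runtimeService interface {
        RunPodSandbox(pod string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    // startStaticPod performs the three-step CRI flow and stops at the
    // first failure, mirroring the ordering visible in the log.
    func startStaticPod(rt runtimeService, pod, container string) error {
        sb, err := rt.RunPodSandbox(pod)
        if err != nil {
            return fmt.Errorf("RunPodSandbox for %s: %w", pod, err)
        }
        id, err := rt.CreateContainer(sb, container)
        if err != nil {
            return fmt.Errorf("CreateContainer within sandbox %s: %w", sb, err)
        }
        return rt.StartContainer(id)
    }

    // fakeRuntime just records calls so the sketch is runnable.
    type fakeRuntime struct{ n int }

    func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
        f.n++
        return fmt.Sprintf("sandbox-%d", f.n), nil
    }
    func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
        return sb + "/" + name, nil
    }
    func (f *fakeRuntime) StartContainer(id string) error {
        fmt.Println("StartContainer for", id, "returns successfully")
        return nil
    }

    func main() {
        if err := startStaticPod(&fakeRuntime{}, "kube-scheduler", "kube-scheduler"); err != nil {
            fmt.Println(err)
        }
    }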
Feb 12 19:17:24.167386 env[1382]: time="2024-02-12T19:17:24.167267148Z" level=info msg="CreateContainer within sandbox \"a9ad48ade31a1a80132cbeeadd2410d96b9990470f6e1e6b2fe00682b810b2ee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"33ab5f2e5b50539b4ef50dc669f092a466fa2e414d6bb684f008351c45f24073\"" Feb 12 19:17:24.168160 env[1382]: time="2024-02-12T19:17:24.168124427Z" level=info msg="StartContainer for \"33ab5f2e5b50539b4ef50dc669f092a466fa2e414d6bb684f008351c45f24073\"" Feb 12 19:17:24.173348 env[1382]: time="2024-02-12T19:17:24.173287182Z" level=info msg="StartContainer for \"a2dcc0e4ed2a6b0662dc6c8042eb48ed4b604abcc9cb3df16473aab3fba240c2\" returns successfully" Feb 12 19:17:24.188124 systemd[1]: Started cri-containerd-33ab5f2e5b50539b4ef50dc669f092a466fa2e414d6bb684f008351c45f24073.scope. Feb 12 19:17:24.193672 env[1382]: time="2024-02-12T19:17:24.193627159Z" level=info msg="CreateContainer within sandbox \"6e9ff90c30b76babbfe2bdbe65804b2ce2786d26c87784055e7a3009efc02c42\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8f04cac450aa8e5d51b207a24f171382f89e28ea5c03a25f20437a757cf46a2f\"" Feb 12 19:17:24.194247 env[1382]: time="2024-02-12T19:17:24.194214039Z" level=info msg="StartContainer for \"8f04cac450aa8e5d51b207a24f171382f89e28ea5c03a25f20437a757cf46a2f\"" Feb 12 19:17:24.211256 systemd[1]: Started cri-containerd-8f04cac450aa8e5d51b207a24f171382f89e28ea5c03a25f20437a757cf46a2f.scope. Feb 12 19:17:24.214893 kubelet[2114]: E0212 19:17:24.214820 2114 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.31:6443: connect: connection refused Feb 12 19:17:24.265668 env[1382]: time="2024-02-12T19:17:24.265621961Z" level=info msg="StartContainer for \"8f04cac450aa8e5d51b207a24f171382f89e28ea5c03a25f20437a757cf46a2f\" returns successfully" Feb 12 19:17:24.275711 env[1382]: time="2024-02-12T19:17:24.275658750Z" level=info msg="StartContainer for \"33ab5f2e5b50539b4ef50dc669f092a466fa2e414d6bb684f008351c45f24073\" returns successfully" Feb 12 19:17:25.321827 kubelet[2114]: I0212 19:17:25.321803 2114 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:26.981669 kubelet[2114]: E0212 19:17:26.981631 2114 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-e08ac1c56f\" not found" node="ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:27.005724 kubelet[2114]: I0212 19:17:27.005695 2114 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:27.101777 kubelet[2114]: E0212 19:17:27.101674 2114 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e08ac1c56f.17b3339f1d77cf34", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e08ac1c56f", 
UID:"ci-3510.3.2-a-e08ac1c56f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e08ac1c56f"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 190757684, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 190757684, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 19:17:27.192387 kubelet[2114]: I0212 19:17:27.192346 2114 apiserver.go:52] "Watching apiserver" Feb 12 19:17:27.210887 kubelet[2114]: I0212 19:17:27.210846 2114 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:17:27.244318 kubelet[2114]: I0212 19:17:27.244230 2114 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:17:27.261611 kubelet[2114]: E0212 19:17:27.261516 2114 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e08ac1c56f.17b3339f1e7425e2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e08ac1c56f", UID:"ci-3510.3.2-a-e08ac1c56f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e08ac1c56f"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 207294946, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 207294946, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:17:27.351847 kubelet[2114]: E0212 19:17:27.351739 2114 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e08ac1c56f.17b3339f21c8bbfa", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e08ac1c56f", UID:"ci-3510.3.2-a-e08ac1c56f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-e08ac1c56f status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e08ac1c56f"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 263170042, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 263170042, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 19:17:27.442330 kubelet[2114]: E0212 19:17:27.442234 2114 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e08ac1c56f.17b3339f21c8de82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e08ac1c56f", UID:"ci-3510.3.2-a-e08ac1c56f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-e08ac1c56f status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e08ac1c56f"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 263178882, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 263178882, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:17:27.599602 kubelet[2114]: E0212 19:17:27.599408 2114 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e08ac1c56f.17b3339f21c8e99a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e08ac1c56f", UID:"ci-3510.3.2-a-e08ac1c56f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-e08ac1c56f status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e08ac1c56f"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 263181722, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 263181722, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 19:17:27.655405 kubelet[2114]: E0212 19:17:27.655304 2114 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e08ac1c56f.17b3339f23ec5f99", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e08ac1c56f", UID:"ci-3510.3.2-a-e08ac1c56f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e08ac1c56f"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 299060121, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 299060121, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:17:27.801166 kubelet[2114]: E0212 19:17:27.801074 2114 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e08ac1c56f.17b3339f21c8bbfa", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e08ac1c56f", UID:"ci-3510.3.2-a-e08ac1c56f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-e08ac1c56f status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e08ac1c56f"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 263170042, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 311711627, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 19:17:27.859716 kubelet[2114]: E0212 19:17:27.859555 2114 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e08ac1c56f.17b3339f21c8de82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e08ac1c56f", UID:"ci-3510.3.2-a-e08ac1c56f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-e08ac1c56f status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e08ac1c56f"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 263178882, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 311716507, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 19:17:27.916328 kubelet[2114]: E0212 19:17:27.916225 2114 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e08ac1c56f.17b3339f21c8e99a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e08ac1c56f", UID:"ci-3510.3.2-a-e08ac1c56f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-e08ac1c56f status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e08ac1c56f"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 263181722, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 311719267, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 19:17:27.976122 kubelet[2114]: E0212 19:17:27.976010 2114 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e08ac1c56f.17b3339f21c8bbfa", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e08ac1c56f", UID:"ci-3510.3.2-a-e08ac1c56f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-e08ac1c56f status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e08ac1c56f"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 263170042, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 514508436, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
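[Annotation] Note how the repeated NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID events above are not emitted as new objects: each repeat keeps the same event name and FirstTimestamp, advances LastTimestamp, and bumps Count (1 -> 2 -> 3). A minimal sketch of that aggregation, keyed on the involved object plus reason (illustrative, not the event recorder's real correlation logic):

    package main

    import (
        "fmt"
        "time"
    )

    type event struct {
        Object, Reason string
        Count          int
        First, Last    time.Time
    }

    // recorder folds repeats of the same (object, reason) pair into one event.
    type recorder struct{ seen map[string]*event }

    func (r *recorder) record(object, reason string, at time.Time) *event {
        key := object + "/" + reason
        if e, ok := r.seen[key]; ok {
            e.Count++   // same logical event: bump Count,
            e.Last = at // advance LastTimestamp,
            return e    // and keep FirstTimestamp as-is
        }
        e := &event{Object: object, Reason: reason, Count: 1, First: at, Last: at}
        r.seen[key] = e
        return e
    }

    func main() {
        r := &recorder{seen: map[string]*event{}}
        t0 := time.Now()
        r.record("ci-3510.3.2-a-e08ac1c56f", "NodeHasSufficientMemory", t0)
        r.record("ci-3510.3.2-a-e08ac1c56f", "NodeHasSufficientMemory", t0.Add(50*time.Millisecond))
        e := r.record("ci-3510.3.2-a-e08ac1c56f", "NodeHasSufficientMemory", t0.Add(250*time.Millisecond))
        fmt.Println(e.Reason, "Count:", e.Count) // NodeHasSufficientMemory Count: 3
    }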
Feb 12 19:17:28.159830 kubelet[2114]: E0212 19:17:28.159648 2114 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e08ac1c56f.17b3339f21c8de82", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e08ac1c56f", UID:"ci-3510.3.2-a-e08ac1c56f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-e08ac1c56f status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e08ac1c56f"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 263178882, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 17, 22, 514517236, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 19:17:29.374885 systemd[1]: Reloading. Feb 12 19:17:29.443462 /usr/lib/systemd/system-generators/torcx-generator[2442]: time="2024-02-12T19:17:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:17:29.443492 /usr/lib/systemd/system-generators/torcx-generator[2442]: time="2024-02-12T19:17:29Z" level=info msg="torcx already run" Feb 12 19:17:29.532880 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:17:29.533048 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:17:29.550113 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:17:29.671988 systemd[1]: Stopping kubelet.service... Feb 12 19:17:29.688962 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 19:17:29.689155 systemd[1]: Stopped kubelet.service. Feb 12 19:17:29.689202 systemd[1]: kubelet.service: Consumed 1.218s CPU time. Feb 12 19:17:29.691094 systemd[1]: Started kubelet.service. Feb 12 19:17:29.751614 kubelet[2501]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:17:29.751614 kubelet[2501]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
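[Annotation] The systemd reload above also surfaces two unit-file migrations requested for locksmithd.service: CPUShares= should become CPUWeight= and MemoryLimit= should become MemoryMax=. One way to address that without touching the shipped unit is a drop-in; a sketch with illustrative values (the path and numbers are assumptions, not taken from the real unit):

    # /etc/systemd/system/locksmithd.service.d/10-resources.conf
    [Service]
    # CPUWeight= replaces the legacy CPUShares= directive.
    CPUWeight=100
    # MemoryMax= replaces the legacy MemoryLimit= directive.
    MemoryMax=512M

The /var/run/docker.sock warning in the same reload is handled automatically (systemd rewrites it to /run/docker.sock), but the unit file should still be updated to silence it.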
Feb 12 19:17:29.751614 kubelet[2501]: I0212 19:17:29.751451 2501 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:17:29.760005 kubelet[2501]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:17:29.760005 kubelet[2501]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:17:29.762882 kubelet[2501]: I0212 19:17:29.762855 2501 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:17:29.762882 kubelet[2501]: I0212 19:17:29.762879 2501 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:17:29.763783 kubelet[2501]: I0212 19:17:29.763109 2501 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:17:29.764308 kubelet[2501]: I0212 19:17:29.764282 2501 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 19:17:29.765143 kubelet[2501]: I0212 19:17:29.765124 2501 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:17:29.767298 kubelet[2501]: W0212 19:17:29.767278 2501 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 19:17:29.767929 kubelet[2501]: I0212 19:17:29.767912 2501 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 19:17:29.768145 kubelet[2501]: I0212 19:17:29.768129 2501 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:17:29.768209 kubelet[2501]: I0212 19:17:29.768195 2501 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:17:29.768276 kubelet[2501]: I0212 19:17:29.768216 2501 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:17:29.768276 kubelet[2501]: I0212 19:17:29.768227 2501 container_manager_linux.go:308] "Creating 
device plugin manager" Feb 12 19:17:29.768276 kubelet[2501]: I0212 19:17:29.768253 2501 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:17:29.776761 kubelet[2501]: I0212 19:17:29.776731 2501 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:17:29.776899 kubelet[2501]: I0212 19:17:29.776888 2501 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:17:29.776976 kubelet[2501]: I0212 19:17:29.776966 2501 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:17:29.777034 kubelet[2501]: I0212 19:17:29.777025 2501 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:17:29.785852 kubelet[2501]: I0212 19:17:29.782066 2501 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:17:29.785852 kubelet[2501]: I0212 19:17:29.782420 2501 server.go:1186] "Started kubelet" Feb 12 19:17:29.785852 kubelet[2501]: I0212 19:17:29.783801 2501 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:17:29.786146 kubelet[2501]: I0212 19:17:29.786115 2501 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:17:29.786703 kubelet[2501]: I0212 19:17:29.786677 2501 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:17:29.792035 kubelet[2501]: I0212 19:17:29.792010 2501 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:17:29.795518 kubelet[2501]: I0212 19:17:29.793849 2501 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:17:29.809466 kubelet[2501]: E0212 19:17:29.809440 2501 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:17:29.809466 kubelet[2501]: E0212 19:17:29.809470 2501 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:17:29.820432 sudo[2528]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 12 19:17:29.820987 sudo[2528]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 12 19:17:29.850696 kubelet[2501]: I0212 19:17:29.850664 2501 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:17:29.863898 kubelet[2501]: I0212 19:17:29.863864 2501 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 19:17:29.863898 kubelet[2501]: I0212 19:17:29.863890 2501 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:17:29.863898 kubelet[2501]: I0212 19:17:29.863905 2501 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:17:29.864059 kubelet[2501]: E0212 19:17:29.863947 2501 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:17:29.904507 kubelet[2501]: I0212 19:17:29.904468 2501 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:29.917573 kubelet[2501]: I0212 19:17:29.917543 2501 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:29.917694 kubelet[2501]: I0212 19:17:29.917610 2501 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:29.926432 kubelet[2501]: I0212 19:17:29.926341 2501 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:17:29.926432 kubelet[2501]: I0212 19:17:29.926366 2501 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:17:29.926432 kubelet[2501]: I0212 19:17:29.926381 2501 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:17:29.926609 kubelet[2501]: I0212 19:17:29.926531 2501 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 19:17:29.926609 kubelet[2501]: I0212 19:17:29.926546 2501 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 19:17:29.926609 kubelet[2501]: I0212 19:17:29.926559 2501 policy_none.go:49] "None policy: Start" Feb 12 19:17:29.928073 kubelet[2501]: I0212 19:17:29.928040 2501 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:17:29.928073 kubelet[2501]: I0212 19:17:29.928066 2501 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:17:29.928202 kubelet[2501]: I0212 19:17:29.928183 2501 state_mem.go:75] "Updated machine memory state" Feb 12 19:17:29.931324 kubelet[2501]: I0212 19:17:29.931300 2501 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:17:29.932548 kubelet[2501]: I0212 19:17:29.931965 2501 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:17:29.964975 kubelet[2501]: I0212 19:17:29.964943 2501 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:17:29.965196 kubelet[2501]: I0212 19:17:29.965183 2501 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:17:29.965311 kubelet[2501]: I0212 19:17:29.965301 2501 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:17:29.994997 kubelet[2501]: I0212 19:17:29.994973 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d871768a7561c2aa53899865f6350435-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d871768a7561c2aa53899865f6350435\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:29.995175 kubelet[2501]: I0212 19:17:29.995165 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d871768a7561c2aa53899865f6350435-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d871768a7561c2aa53899865f6350435\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:29.995294 kubelet[2501]: I0212 19:17:29.995284 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/775e084c3ccdd96a9fc06d3ec2ddf61d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-e08ac1c56f\" (UID: \"775e084c3ccdd96a9fc06d3ec2ddf61d\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:29.995400 kubelet[2501]: I0212 19:17:29.995391 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3f122774b25aa8272b57c0e8f8d0800-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d3f122774b25aa8272b57c0e8f8d0800\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:29.995542 kubelet[2501]: I0212 19:17:29.995532 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d871768a7561c2aa53899865f6350435-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d871768a7561c2aa53899865f6350435\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:29.995632 kubelet[2501]: I0212 19:17:29.995624 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d871768a7561c2aa53899865f6350435-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d871768a7561c2aa53899865f6350435\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:29.995733 kubelet[2501]: I0212 19:17:29.995724 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d871768a7561c2aa53899865f6350435-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d871768a7561c2aa53899865f6350435\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:29.995832 kubelet[2501]: I0212 19:17:29.995824 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3f122774b25aa8272b57c0e8f8d0800-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d3f122774b25aa8272b57c0e8f8d0800\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:29.995936 kubelet[2501]: I0212 19:17:29.995927 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3f122774b25aa8272b57c0e8f8d0800-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e08ac1c56f\" (UID: \"d3f122774b25aa8272b57c0e8f8d0800\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:30.355079 sudo[2528]: pam_unix(sudo:session): session closed for user root Feb 12 19:17:30.782215 kubelet[2501]: I0212 19:17:30.782184 2501 apiserver.go:52] "Watching apiserver" Feb 12 19:17:30.794637 kubelet[2501]: I0212 19:17:30.794607 2501 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:17:30.799049 kubelet[2501]: I0212 19:17:30.799024 2501 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:17:30.986598 kubelet[2501]: E0212 
19:17:30.986569 2501 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-e08ac1c56f\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:31.386285 kubelet[2501]: E0212 19:17:31.386257 2501 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-e08ac1c56f\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:31.586178 kubelet[2501]: E0212 19:17:31.586152 2501 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-e08ac1c56f\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e08ac1c56f" Feb 12 19:17:32.184008 kubelet[2501]: I0212 19:17:32.183970 2501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-e08ac1c56f" podStartSLOduration=3.183933349 pod.CreationTimestamp="2024-02-12 19:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:31.790905841 +0000 UTC m=+2.097010330" watchObservedRunningTime="2024-02-12 19:17:32.183933349 +0000 UTC m=+2.490037838" Feb 12 19:17:32.184380 kubelet[2501]: I0212 19:17:32.184070 2501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-e08ac1c56f" podStartSLOduration=3.184053589 pod.CreationTimestamp="2024-02-12 19:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:32.183628909 +0000 UTC m=+2.489733398" watchObservedRunningTime="2024-02-12 19:17:32.184053589 +0000 UTC m=+2.490158078" Feb 12 19:17:32.584339 kubelet[2501]: I0212 19:17:32.584304 2501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e08ac1c56f" podStartSLOduration=3.584257813 pod.CreationTimestamp="2024-02-12 19:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:32.583716454 +0000 UTC m=+2.889820943" watchObservedRunningTime="2024-02-12 19:17:32.584257813 +0000 UTC m=+2.890362302" Feb 12 19:17:32.669310 sudo[1708]: pam_unix(sudo:session): session closed for user root Feb 12 19:17:32.748642 sshd[1705]: pam_unix(sshd:session): session closed for user core Feb 12 19:17:32.751486 systemd-logind[1367]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:17:32.752234 systemd[1]: sshd@4-10.200.20.31:22-10.200.12.6:59870.service: Deactivated successfully. Feb 12 19:17:32.752984 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:17:32.753161 systemd[1]: session-7.scope: Consumed 7.386s CPU time. Feb 12 19:17:32.753479 systemd-logind[1367]: Removed session 7. Feb 12 19:17:43.433134 kubelet[2501]: I0212 19:17:43.433099 2501 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 19:17:43.433847 env[1382]: time="2024-02-12T19:17:43.433758219Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
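The kubelet above has just received its pod CIDR (192.168.0.0/24), presumably from the controller-manager's node IPAM, and containerd is now waiting for Cilium to install a CNI config. A minimal client-go sketch, with a placeholder kubeconfig path and the node name taken from the journal, that reads the same field the kubelet is propagating:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is a placeholder; adjust for the environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name as registered in the journal above.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ci-3510.3.2-a-e08ac1c56f", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(node.Spec.PodCIDR) // 192.168.0.0/24, per the log
}
```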
Feb 12 19:17:43.434219 kubelet[2501]: I0212 19:17:43.434195 2501 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 19:17:44.238803 kubelet[2501]: I0212 19:17:44.238770 2501 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:17:44.243575 systemd[1]: Created slice kubepods-besteffort-podd62ea721_a023_4004_a073_62eac1e4d96d.slice. Feb 12 19:17:44.248799 kubelet[2501]: I0212 19:17:44.248770 2501 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:17:44.253864 systemd[1]: Created slice kubepods-burstable-pod849642ce_03a5_4154_ba6d_ebabf8c717fc.slice. Feb 12 19:17:44.263804 kubelet[2501]: W0212 19:17:44.263709 2501 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-e08ac1c56f" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-e08ac1c56f' and this object Feb 12 19:17:44.263804 kubelet[2501]: E0212 19:17:44.263761 2501 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-e08ac1c56f" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-e08ac1c56f' and this object Feb 12 19:17:44.263967 kubelet[2501]: W0212 19:17:44.263837 2501 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-e08ac1c56f" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-e08ac1c56f' and this object Feb 12 19:17:44.263967 kubelet[2501]: E0212 19:17:44.263848 2501 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-e08ac1c56f" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-e08ac1c56f' and this object Feb 12 19:17:44.263967 kubelet[2501]: W0212 19:17:44.263878 2501 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-e08ac1c56f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-e08ac1c56f' and this object Feb 12 19:17:44.263967 kubelet[2501]: E0212 19:17:44.263887 2501 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-e08ac1c56f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-e08ac1c56f' and this object Feb 12 19:17:44.272707 kubelet[2501]: I0212 19:17:44.272666 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/849642ce-03a5-4154-ba6d-ebabf8c717fc-clustermesh-secrets\") pod \"cilium-qvj6k\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " pod="kube-system/cilium-qvj6k" Feb 12 19:17:44.272707 kubelet[2501]: I0212 19:17:44.272707 2501 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d62ea721-a023-4004-a073-62eac1e4d96d-xtables-lock\") pod \"kube-proxy-cvghw\" (UID: \"d62ea721-a023-4004-a073-62eac1e4d96d\") " pod="kube-system/kube-proxy-cvghw" Feb 12 19:17:44.272879 kubelet[2501]: I0212 19:17:44.272730 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-bpf-maps\") pod \"cilium-qvj6k\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " pod="kube-system/cilium-qvj6k" Feb 12 19:17:44.272879 kubelet[2501]: I0212 19:17:44.272759 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-hostproc\") pod \"cilium-qvj6k\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " pod="kube-system/cilium-qvj6k" Feb 12 19:17:44.272879 kubelet[2501]: I0212 19:17:44.272778 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-lib-modules\") pod \"cilium-qvj6k\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " pod="kube-system/cilium-qvj6k" Feb 12 19:17:44.272879 kubelet[2501]: I0212 19:17:44.272799 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-cilium-cgroup\") pod \"cilium-qvj6k\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " pod="kube-system/cilium-qvj6k" Feb 12 19:17:44.272879 kubelet[2501]: I0212 19:17:44.272826 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/849642ce-03a5-4154-ba6d-ebabf8c717fc-hubble-tls\") pod \"cilium-qvj6k\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " pod="kube-system/cilium-qvj6k" Feb 12 19:17:44.272879 kubelet[2501]: I0212 19:17:44.272846 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d62ea721-a023-4004-a073-62eac1e4d96d-lib-modules\") pod \"kube-proxy-cvghw\" (UID: \"d62ea721-a023-4004-a073-62eac1e4d96d\") " pod="kube-system/kube-proxy-cvghw" Feb 12 19:17:44.273011 kubelet[2501]: I0212 19:17:44.272866 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-cni-path\") pod \"cilium-qvj6k\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " pod="kube-system/cilium-qvj6k" Feb 12 19:17:44.273011 kubelet[2501]: I0212 19:17:44.272883 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-cilium-run\") pod \"cilium-qvj6k\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " pod="kube-system/cilium-qvj6k" Feb 12 19:17:44.273011 kubelet[2501]: I0212 19:17:44.272913 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/849642ce-03a5-4154-ba6d-ebabf8c717fc-cilium-config-path\") pod 
\"cilium-qvj6k\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " pod="kube-system/cilium-qvj6k" Feb 12 19:17:44.273011 kubelet[2501]: I0212 19:17:44.272933 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-host-proc-sys-net\") pod \"cilium-qvj6k\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " pod="kube-system/cilium-qvj6k" Feb 12 19:17:44.273011 kubelet[2501]: I0212 19:17:44.272954 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-xtables-lock\") pod \"cilium-qvj6k\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " pod="kube-system/cilium-qvj6k" Feb 12 19:17:44.273011 kubelet[2501]: I0212 19:17:44.272984 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs6m5\" (UniqueName: \"kubernetes.io/projected/d62ea721-a023-4004-a073-62eac1e4d96d-kube-api-access-qs6m5\") pod \"kube-proxy-cvghw\" (UID: \"d62ea721-a023-4004-a073-62eac1e4d96d\") " pod="kube-system/kube-proxy-cvghw" Feb 12 19:17:44.273142 kubelet[2501]: I0212 19:17:44.273006 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d62ea721-a023-4004-a073-62eac1e4d96d-kube-proxy\") pod \"kube-proxy-cvghw\" (UID: \"d62ea721-a023-4004-a073-62eac1e4d96d\") " pod="kube-system/kube-proxy-cvghw" Feb 12 19:17:44.273142 kubelet[2501]: I0212 19:17:44.273026 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-etc-cni-netd\") pod \"cilium-qvj6k\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " pod="kube-system/cilium-qvj6k" Feb 12 19:17:44.273142 kubelet[2501]: I0212 19:17:44.273054 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-host-proc-sys-kernel\") pod \"cilium-qvj6k\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " pod="kube-system/cilium-qvj6k" Feb 12 19:17:44.273142 kubelet[2501]: I0212 19:17:44.273082 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-849kx\" (UniqueName: \"kubernetes.io/projected/849642ce-03a5-4154-ba6d-ebabf8c717fc-kube-api-access-849kx\") pod \"cilium-qvj6k\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " pod="kube-system/cilium-qvj6k" Feb 12 19:17:44.404599 kubelet[2501]: I0212 19:17:44.404565 2501 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:17:44.409452 systemd[1]: Created slice kubepods-besteffort-podb64db31d_d12a_47be_9728_06a0de150be9.slice. 
Feb 12 19:17:44.473696 kubelet[2501]: I0212 19:17:44.473665 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq4w7\" (UniqueName: \"kubernetes.io/projected/b64db31d-d12a-47be-9728-06a0de150be9-kube-api-access-rq4w7\") pod \"cilium-operator-f59cbd8c6-74zhp\" (UID: \"b64db31d-d12a-47be-9728-06a0de150be9\") " pod="kube-system/cilium-operator-f59cbd8c6-74zhp" Feb 12 19:17:44.474116 kubelet[2501]: I0212 19:17:44.474087 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b64db31d-d12a-47be-9728-06a0de150be9-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-74zhp\" (UID: \"b64db31d-d12a-47be-9728-06a0de150be9\") " pod="kube-system/cilium-operator-f59cbd8c6-74zhp" Feb 12 19:17:44.553534 env[1382]: time="2024-02-12T19:17:44.551694393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvghw,Uid:d62ea721-a023-4004-a073-62eac1e4d96d,Namespace:kube-system,Attempt:0,}" Feb 12 19:17:44.589384 env[1382]: time="2024-02-12T19:17:44.589309484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:17:44.589384 env[1382]: time="2024-02-12T19:17:44.589347644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:17:44.589613 env[1382]: time="2024-02-12T19:17:44.589377844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:17:44.589613 env[1382]: time="2024-02-12T19:17:44.589489083Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77bfec3d442925aee1100d3f6646316f07cce5882d6402efed4f7af79b07fe22 pid=2614 runtime=io.containerd.runc.v2 Feb 12 19:17:44.608696 systemd[1]: Started cri-containerd-77bfec3d442925aee1100d3f6646316f07cce5882d6402efed4f7af79b07fe22.scope. Feb 12 19:17:44.628140 env[1382]: time="2024-02-12T19:17:44.628078974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvghw,Uid:d62ea721-a023-4004-a073-62eac1e4d96d,Namespace:kube-system,Attempt:0,} returns sandbox id \"77bfec3d442925aee1100d3f6646316f07cce5882d6402efed4f7af79b07fe22\"" Feb 12 19:17:44.633348 env[1382]: time="2024-02-12T19:17:44.633305490Z" level=info msg="CreateContainer within sandbox \"77bfec3d442925aee1100d3f6646316f07cce5882d6402efed4f7af79b07fe22\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:17:44.698060 env[1382]: time="2024-02-12T19:17:44.698010880Z" level=info msg="CreateContainer within sandbox \"77bfec3d442925aee1100d3f6646316f07cce5882d6402efed4f7af79b07fe22\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c447c4a8c100ed1a8f65e49a0e2b37e7d54a1a44c1596e7ceb5a0fda9033fe1e\"" Feb 12 19:17:44.700249 env[1382]: time="2024-02-12T19:17:44.699659119Z" level=info msg="StartContainer for \"c447c4a8c100ed1a8f65e49a0e2b37e7d54a1a44c1596e7ceb5a0fda9033fe1e\"" Feb 12 19:17:44.717398 systemd[1]: Started cri-containerd-c447c4a8c100ed1a8f65e49a0e2b37e7d54a1a44c1596e7ceb5a0fda9033fe1e.scope. 
Feb 12 19:17:44.748815 env[1382]: time="2024-02-12T19:17:44.748750081Z" level=info msg="StartContainer for \"c447c4a8c100ed1a8f65e49a0e2b37e7d54a1a44c1596e7ceb5a0fda9033fe1e\" returns successfully" Feb 12 19:17:45.373892 kubelet[2501]: E0212 19:17:45.373856 2501 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 12 19:17:45.374045 kubelet[2501]: E0212 19:17:45.373913 2501 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Feb 12 19:17:45.374045 kubelet[2501]: E0212 19:17:45.373977 2501 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/849642ce-03a5-4154-ba6d-ebabf8c717fc-clustermesh-secrets podName:849642ce-03a5-4154-ba6d-ebabf8c717fc nodeName:}" failed. No retries permitted until 2024-02-12 19:17:45.873957164 +0000 UTC m=+16.180061613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/849642ce-03a5-4154-ba6d-ebabf8c717fc-clustermesh-secrets") pod "cilium-qvj6k" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc") : failed to sync secret cache: timed out waiting for the condition Feb 12 19:17:45.374204 kubelet[2501]: E0212 19:17:45.374186 2501 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 19:17:45.374254 kubelet[2501]: E0212 19:17:45.374224 2501 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/849642ce-03a5-4154-ba6d-ebabf8c717fc-cilium-config-path podName:849642ce-03a5-4154-ba6d-ebabf8c717fc nodeName:}" failed. No retries permitted until 2024-02-12 19:17:45.874215364 +0000 UTC m=+16.180319853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/849642ce-03a5-4154-ba6d-ebabf8c717fc-cilium-config-path") pod "cilium-qvj6k" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc") : failed to sync configmap cache: timed out waiting for the condition Feb 12 19:17:45.374254 kubelet[2501]: E0212 19:17:45.373887 2501 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-qvj6k: failed to sync secret cache: timed out waiting for the condition Feb 12 19:17:45.374332 kubelet[2501]: E0212 19:17:45.374282 2501 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/849642ce-03a5-4154-ba6d-ebabf8c717fc-hubble-tls podName:849642ce-03a5-4154-ba6d-ebabf8c717fc nodeName:}" failed. No retries permitted until 2024-02-12 19:17:45.874273844 +0000 UTC m=+16.180378333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/849642ce-03a5-4154-ba6d-ebabf8c717fc-hubble-tls") pod "cilium-qvj6k" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc") : failed to sync secret cache: timed out waiting for the condition Feb 12 19:17:45.400078 systemd[1]: run-containerd-runc-k8s.io-77bfec3d442925aee1100d3f6646316f07cce5882d6402efed4f7af79b07fe22-runc.mUM6ko.mount: Deactivated successfully. 
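The MountVolume failures above look like a benign startup race: the node had only just been authorized to read the pod's secrets and configmap (the "no relationship found between node ... and this object" reflector errors earlier), so the informer caches had not synced when the first mount was attempted. Per the log, the volume manager schedules the first retry a fixed 500ms after the failure, and the timestamps line up exactly:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Failure instant recovered from the "No retries permitted until" line
	// minus the logged durationBeforeRetry of 500ms.
	failedAt, _ := time.Parse(time.RFC3339Nano, "2024-02-12T19:17:45.373957164Z")
	fmt.Println(failedAt.Add(500 * time.Millisecond).UTC())
	// 2024-02-12 19:17:45.873957164 +0000 UTC, the retry deadline in the journal
}
```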
Feb 12 19:17:45.612966 env[1382]: time="2024-02-12T19:17:45.612917623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-74zhp,Uid:b64db31d-d12a-47be-9728-06a0de150be9,Namespace:kube-system,Attempt:0,}" Feb 12 19:17:45.646147 kubelet[2501]: I0212 19:17:45.646041 2501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cvghw" podStartSLOduration=1.645987758 pod.CreationTimestamp="2024-02-12 19:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:45.645939438 +0000 UTC m=+15.952043927" watchObservedRunningTime="2024-02-12 19:17:45.645987758 +0000 UTC m=+15.952092247" Feb 12 19:17:45.658874 env[1382]: time="2024-02-12T19:17:45.658808308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:17:45.659041 env[1382]: time="2024-02-12T19:17:45.658849308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:17:45.659041 env[1382]: time="2024-02-12T19:17:45.658859708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:17:45.659874 env[1382]: time="2024-02-12T19:17:45.659060388Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c pid=2799 runtime=io.containerd.runc.v2 Feb 12 19:17:45.673630 systemd[1]: Started cri-containerd-0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c.scope. Feb 12 19:17:45.676647 systemd[1]: run-containerd-runc-k8s.io-0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c-runc.9jK1AR.mount: Deactivated successfully. Feb 12 19:17:45.711114 env[1382]: time="2024-02-12T19:17:45.711074709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-74zhp,Uid:b64db31d-d12a-47be-9728-06a0de150be9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c\"" Feb 12 19:17:45.713532 env[1382]: time="2024-02-12T19:17:45.712928107Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:17:46.059385 env[1382]: time="2024-02-12T19:17:46.059343165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qvj6k,Uid:849642ce-03a5-4154-ba6d-ebabf8c717fc,Namespace:kube-system,Attempt:0,}" Feb 12 19:17:46.169468 env[1382]: time="2024-02-12T19:17:46.169397803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:17:46.169468 env[1382]: time="2024-02-12T19:17:46.169436843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:17:46.169468 env[1382]: time="2024-02-12T19:17:46.169446963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:17:46.169815 env[1382]: time="2024-02-12T19:17:46.169772643Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b pid=2842 runtime=io.containerd.runc.v2 Feb 12 19:17:46.180446 systemd[1]: Started cri-containerd-e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b.scope. Feb 12 19:17:46.202373 env[1382]: time="2024-02-12T19:17:46.202141979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qvj6k,Uid:849642ce-03a5-4154-ba6d-ebabf8c717fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\"" Feb 12 19:17:47.949390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount868675911.mount: Deactivated successfully. Feb 12 19:17:49.205406 env[1382]: time="2024-02-12T19:17:49.205365733Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:49.211644 env[1382]: time="2024-02-12T19:17:49.211615129Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:49.216985 env[1382]: time="2024-02-12T19:17:49.216958005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:49.217694 env[1382]: time="2024-02-12T19:17:49.217668324Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 12 19:17:49.219702 env[1382]: time="2024-02-12T19:17:49.219365643Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:17:49.220483 env[1382]: time="2024-02-12T19:17:49.220452162Z" level=info msg="CreateContainer within sandbox \"0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 19:17:49.252194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1177466103.mount: Deactivated successfully. Feb 12 19:17:49.280074 env[1382]: time="2024-02-12T19:17:49.280024880Z" level=info msg="CreateContainer within sandbox \"0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4\"" Feb 12 19:17:49.280602 env[1382]: time="2024-02-12T19:17:49.280578359Z" level=info msg="StartContainer for \"3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4\"" Feb 12 19:17:49.297713 systemd[1]: Started cri-containerd-3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4.scope. 
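The operator image reference above carries both a tag and a digest; when both are present the digest appears to pin the pull and the tag is informational, and the "returns image reference" line reports a different sha256, seemingly the resolved image ID rather than the requested manifest digest. Splitting the reference the way it appears in the PullImage line:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
	// Digest (if any) follows "@"; tag (if any) follows the last ":" in the name.
	name, digest, _ := strings.Cut(ref, "@")
	repo, tag, _ := strings.Cut(name, ":")
	fmt.Println(repo)   // quay.io/cilium/operator-generic
	fmt.Println(tag)    // v1.12.5
	fmt.Println(digest) // sha256:b296eb7f...
}
```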
Feb 12 19:17:49.330216 env[1382]: time="2024-02-12T19:17:49.330165004Z" level=info msg="StartContainer for \"3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4\" returns successfully" Feb 12 19:17:54.036287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount830598879.mount: Deactivated successfully. Feb 12 19:17:56.762818 env[1382]: time="2024-02-12T19:17:56.762773042Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:56.773616 env[1382]: time="2024-02-12T19:17:56.773570675Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:56.778067 env[1382]: time="2024-02-12T19:17:56.778026912Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:17:56.778769 env[1382]: time="2024-02-12T19:17:56.778741871Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 12 19:17:56.783525 env[1382]: time="2024-02-12T19:17:56.783473948Z" level=info msg="CreateContainer within sandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:17:56.814687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831350510.mount: Deactivated successfully. Feb 12 19:17:56.820317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19106318.mount: Deactivated successfully. Feb 12 19:17:56.847640 env[1382]: time="2024-02-12T19:17:56.847588506Z" level=info msg="CreateContainer within sandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41\"" Feb 12 19:17:56.848224 env[1382]: time="2024-02-12T19:17:56.848181186Z" level=info msg="StartContainer for \"82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41\"" Feb 12 19:17:56.866456 systemd[1]: Started cri-containerd-82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41.scope. Feb 12 19:17:56.900058 env[1382]: time="2024-02-12T19:17:56.900020152Z" level=info msg="StartContainer for \"82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41\" returns successfully" Feb 12 19:17:56.904135 systemd[1]: cri-containerd-82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41.scope: Deactivated successfully. 
Feb 12 19:17:56.946669 kubelet[2501]: I0212 19:17:56.946622 2501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-74zhp" podStartSLOduration=-9.223372023908192e+09 pod.CreationTimestamp="2024-02-12 19:17:44 +0000 UTC" firstStartedPulling="2024-02-12 19:17:45.712420428 +0000 UTC m=+16.018524917" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:17:49.948181041 +0000 UTC m=+20.254285530" watchObservedRunningTime="2024-02-12 19:17:56.946583962 +0000 UTC m=+27.252688451" Feb 12 19:17:57.812964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41-rootfs.mount: Deactivated successfully. Feb 12 19:17:58.160400 env[1382]: time="2024-02-12T19:17:58.160077778Z" level=info msg="shim disconnected" id=82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41 Feb 12 19:17:58.160766 env[1382]: time="2024-02-12T19:17:58.160742217Z" level=warning msg="cleaning up after shim disconnected" id=82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41 namespace=k8s.io Feb 12 19:17:58.160828 env[1382]: time="2024-02-12T19:17:58.160816257Z" level=info msg="cleaning up dead shim" Feb 12 19:17:58.167814 env[1382]: time="2024-02-12T19:17:58.167765173Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:17:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2963 runtime=io.containerd.runc.v2\n" Feb 12 19:17:58.936756 env[1382]: time="2024-02-12T19:17:58.936715441Z" level=info msg="CreateContainer within sandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:17:58.980216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1243659051.mount: Deactivated successfully. Feb 12 19:17:59.000288 env[1382]: time="2024-02-12T19:17:59.000241281Z" level=info msg="CreateContainer within sandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc\"" Feb 12 19:17:59.001013 env[1382]: time="2024-02-12T19:17:59.000922960Z" level=info msg="StartContainer for \"5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc\"" Feb 12 19:17:59.018719 systemd[1]: Started cri-containerd-5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc.scope. Feb 12 19:17:59.055223 env[1382]: time="2024-02-12T19:17:59.055178486Z" level=info msg="StartContainer for \"5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc\" returns successfully" Feb 12 19:17:59.066889 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:17:59.067092 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:17:59.067657 systemd[1]: Stopping systemd-sysctl.service... Feb 12 19:17:59.070270 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:17:59.077386 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:17:59.078885 systemd[1]: cri-containerd-5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc.scope: Deactivated successfully. 
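The podStartSLOduration=-9.223372023908192e+09 above is not a real measurement: lastFinishedPulling is the zero time (0001-01-01), and Go time arithmetic saturates at the int64-nanosecond floor, roughly -292 years, or about -9.2233720369e+09 seconds, instead of overflowing. The logged value sits a few seconds above that floor, presumably from further arithmetic on top of the saturated duration. A sketch of the saturation:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// The tracker ends up subtracting a year-2024 timestamp from year 1.
	// time.Duration is an int64 nanosecond count, and Sub saturates:
	zero := time.Time{} // 0001-01-01 00:00:00 UTC
	now := time.Date(2024, 2, 12, 19, 17, 56, 0, time.UTC)
	fmt.Println(zero.Sub(now).Seconds())      // -9.223372036854776e+09
	fmt.Println(float64(math.MinInt64) / 1e9) // the same saturation floor
}
```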
Feb 12 19:17:59.123374 env[1382]: time="2024-02-12T19:17:59.123329843Z" level=info msg="shim disconnected" id=5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc Feb 12 19:17:59.123693 env[1382]: time="2024-02-12T19:17:59.123673203Z" level=warning msg="cleaning up after shim disconnected" id=5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc namespace=k8s.io Feb 12 19:17:59.123770 env[1382]: time="2024-02-12T19:17:59.123758243Z" level=info msg="cleaning up dead shim" Feb 12 19:17:59.131283 env[1382]: time="2024-02-12T19:17:59.131242358Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:17:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3027 runtime=io.containerd.runc.v2\n" Feb 12 19:17:59.940004 env[1382]: time="2024-02-12T19:17:59.939966127Z" level=info msg="CreateContainer within sandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:17:59.977784 systemd[1]: run-containerd-runc-k8s.io-5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc-runc.6rMvu8.mount: Deactivated successfully. Feb 12 19:17:59.977901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc-rootfs.mount: Deactivated successfully. Feb 12 19:18:00.025622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2574175260.mount: Deactivated successfully. Feb 12 19:18:00.086367 env[1382]: time="2024-02-12T19:18:00.086317555Z" level=info msg="CreateContainer within sandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8\"" Feb 12 19:18:00.087078 env[1382]: time="2024-02-12T19:18:00.087044955Z" level=info msg="StartContainer for \"8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8\"" Feb 12 19:18:00.111573 systemd[1]: Started cri-containerd-8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8.scope. Feb 12 19:18:00.140624 systemd[1]: cri-containerd-8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8.scope: Deactivated successfully. 
Feb 12 19:18:00.152473 env[1382]: time="2024-02-12T19:18:00.152364754Z" level=info msg="StartContainer for \"8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8\" returns successfully" Feb 12 19:18:00.191719 env[1382]: time="2024-02-12T19:18:00.191110730Z" level=info msg="shim disconnected" id=8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8 Feb 12 19:18:00.191719 env[1382]: time="2024-02-12T19:18:00.191160530Z" level=warning msg="cleaning up after shim disconnected" id=8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8 namespace=k8s.io Feb 12 19:18:00.191719 env[1382]: time="2024-02-12T19:18:00.191172050Z" level=info msg="cleaning up dead shim" Feb 12 19:18:00.197685 env[1382]: time="2024-02-12T19:18:00.197636766Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3085 runtime=io.containerd.runc.v2\n" Feb 12 19:18:00.942886 env[1382]: time="2024-02-12T19:18:00.942842060Z" level=info msg="CreateContainer within sandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:18:00.977764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8-rootfs.mount: Deactivated successfully. Feb 12 19:18:00.989654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1848081705.mount: Deactivated successfully. Feb 12 19:18:01.045691 env[1382]: time="2024-02-12T19:18:01.045634396Z" level=info msg="CreateContainer within sandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42\"" Feb 12 19:18:01.046621 env[1382]: time="2024-02-12T19:18:01.046577356Z" level=info msg="StartContainer for \"4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42\"" Feb 12 19:18:01.065028 systemd[1]: Started cri-containerd-4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42.scope. Feb 12 19:18:01.090555 systemd[1]: cri-containerd-4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42.scope: Deactivated successfully. 
Feb 12 19:18:01.091899 env[1382]: time="2024-02-12T19:18:01.091839928Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod849642ce_03a5_4154_ba6d_ebabf8c717fc.slice/cri-containerd-4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42.scope/memory.events\": no such file or directory" Feb 12 19:18:01.112844 env[1382]: time="2024-02-12T19:18:01.112791715Z" level=info msg="StartContainer for \"4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42\" returns successfully" Feb 12 19:18:01.158069 env[1382]: time="2024-02-12T19:18:01.158023967Z" level=info msg="shim disconnected" id=4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42 Feb 12 19:18:01.158295 env[1382]: time="2024-02-12T19:18:01.158278727Z" level=warning msg="cleaning up after shim disconnected" id=4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42 namespace=k8s.io Feb 12 19:18:01.158432 env[1382]: time="2024-02-12T19:18:01.158409087Z" level=info msg="cleaning up dead shim" Feb 12 19:18:01.165166 env[1382]: time="2024-02-12T19:18:01.165124323Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:18:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3141 runtime=io.containerd.runc.v2\n" Feb 12 19:18:01.945279 env[1382]: time="2024-02-12T19:18:01.945229401Z" level=info msg="CreateContainer within sandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:18:01.995077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount834196798.mount: Deactivated successfully. Feb 12 19:18:02.014797 env[1382]: time="2024-02-12T19:18:02.014730918Z" level=info msg="CreateContainer within sandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc\"" Feb 12 19:18:02.015571 env[1382]: time="2024-02-12T19:18:02.015543117Z" level=info msg="StartContainer for \"38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc\"" Feb 12 19:18:02.036683 systemd[1]: Started cri-containerd-38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc.scope. Feb 12 19:18:02.097538 env[1382]: time="2024-02-12T19:18:02.097471307Z" level=info msg="StartContainer for \"38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc\" returns successfully" Feb 12 19:18:02.181535 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 12 19:18:02.228760 kubelet[2501]: I0212 19:18:02.227121 2501 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:18:02.247142 kubelet[2501]: I0212 19:18:02.246431 2501 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:18:02.251021 kubelet[2501]: I0212 19:18:02.250986 2501 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:18:02.252015 systemd[1]: Created slice kubepods-burstable-podf16aae47_36df_4b20_bf9c_41b8494bf3a0.slice. Feb 12 19:18:02.257434 systemd[1]: Created slice kubepods-burstable-pod438f0a09_7fe0_4442_bca8_2c7b885efcd7.slice. 
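This completes Cilium's standard bring-up: four init steps (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) each run once and exit, hence the repeated scope-deactivated / shim-disconnected / dead-shim-cleanup triplets, before the long-running cilium-agent starts. A sketch that recovers the step order from journal text (sample abbreviated from the entries above):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Abbreviated CreateContainer lines copied from the journal above.
	journal := `msg="CreateContainer within sandbox \"e93b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
msg="CreateContainer within sandbox \"e93b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
msg="CreateContainer within sandbox \"e93b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
msg="CreateContainer within sandbox \"e93b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
msg="CreateContainer within sandbox \"e93b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"`
	re := regexp.MustCompile(`ContainerMetadata\{Name:([\w-]+),`)
	for _, m := range re.FindAllStringSubmatch(journal, -1) {
		fmt.Println(m[1]) // mount-cgroup ... cilium-agent, in start order
	}
}
```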
Feb 12 19:18:02.375922 kubelet[2501]: I0212 19:18:02.375881 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srw24\" (UniqueName: \"kubernetes.io/projected/f16aae47-36df-4b20-bf9c-41b8494bf3a0-kube-api-access-srw24\") pod \"coredns-787d4945fb-r5sdj\" (UID: \"f16aae47-36df-4b20-bf9c-41b8494bf3a0\") " pod="kube-system/coredns-787d4945fb-r5sdj" Feb 12 19:18:02.376129 kubelet[2501]: I0212 19:18:02.376115 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/438f0a09-7fe0-4442-bca8-2c7b885efcd7-config-volume\") pod \"coredns-787d4945fb-sbhzb\" (UID: \"438f0a09-7fe0-4442-bca8-2c7b885efcd7\") " pod="kube-system/coredns-787d4945fb-sbhzb" Feb 12 19:18:02.376236 kubelet[2501]: I0212 19:18:02.376224 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26f7p\" (UniqueName: \"kubernetes.io/projected/438f0a09-7fe0-4442-bca8-2c7b885efcd7-kube-api-access-26f7p\") pod \"coredns-787d4945fb-sbhzb\" (UID: \"438f0a09-7fe0-4442-bca8-2c7b885efcd7\") " pod="kube-system/coredns-787d4945fb-sbhzb" Feb 12 19:18:02.376338 kubelet[2501]: I0212 19:18:02.376327 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f16aae47-36df-4b20-bf9c-41b8494bf3a0-config-volume\") pod \"coredns-787d4945fb-r5sdj\" (UID: \"f16aae47-36df-4b20-bf9c-41b8494bf3a0\") " pod="kube-system/coredns-787d4945fb-r5sdj" Feb 12 19:18:02.556778 env[1382]: time="2024-02-12T19:18:02.556302827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r5sdj,Uid:f16aae47-36df-4b20-bf9c-41b8494bf3a0,Namespace:kube-system,Attempt:0,}" Feb 12 19:18:02.557524 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
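The kernel warning repeats because BPF programs are being loaded while unprivileged BPF remains enabled, which weakens the Spectre-BHB mitigation reported at boot. The switch is a sysctl; a sketch that reads it (value semantics per kernel documentation):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// 0 = unprivileged BPF allowed (the state this kernel is warning about),
	// 1 = disabled and locked until reboot, 2 = disabled but toggleable.
	b, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
	if err != nil {
		panic(err)
	}
	fmt.Println("kernel.unprivileged_bpf_disabled =", strings.TrimSpace(string(b)))
}
```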
Feb 12 19:18:02.560434 env[1382]: time="2024-02-12T19:18:02.560335544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-sbhzb,Uid:438f0a09-7fe0-4442-bca8-2c7b885efcd7,Namespace:kube-system,Attempt:0,}" Feb 12 19:18:02.962298 kubelet[2501]: I0212 19:18:02.960928 2501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qvj6k" podStartSLOduration=-9.223372017893885e+09 pod.CreationTimestamp="2024-02-12 19:17:44 +0000 UTC" firstStartedPulling="2024-02-12 19:17:46.203956937 +0000 UTC m=+16.510061426" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:18:02.959118661 +0000 UTC m=+33.265223190" watchObservedRunningTime="2024-02-12 19:18:02.960892059 +0000 UTC m=+33.266996548" Feb 12 19:18:04.241544 systemd-networkd[1527]: cilium_host: Link UP Feb 12 19:18:04.242000 systemd-networkd[1527]: cilium_net: Link UP Feb 12 19:18:04.260832 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 19:18:04.260949 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 19:18:04.263253 systemd-networkd[1527]: cilium_net: Gained carrier Feb 12 19:18:04.263454 systemd-networkd[1527]: cilium_host: Gained carrier Feb 12 19:18:04.284575 systemd-networkd[1527]: cilium_net: Gained IPv6LL Feb 12 19:18:04.434011 systemd-networkd[1527]: cilium_vxlan: Link UP Feb 12 19:18:04.434018 systemd-networkd[1527]: cilium_vxlan: Gained carrier Feb 12 19:18:04.757533 kernel: NET: Registered PF_ALG protocol family Feb 12 19:18:04.819691 systemd-networkd[1527]: cilium_host: Gained IPv6LL Feb 12 19:18:05.423787 systemd-networkd[1527]: lxc_health: Link UP Feb 12 19:18:05.437663 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:18:05.437971 systemd-networkd[1527]: lxc_health: Gained carrier Feb 12 19:18:05.651637 systemd-networkd[1527]: cilium_vxlan: Gained IPv6LL Feb 12 19:18:05.705314 systemd-networkd[1527]: lxc644e0b1335f1: Link UP Feb 12 19:18:05.715550 kernel: eth0: renamed from tmp1747c Feb 12 19:18:05.730974 systemd-networkd[1527]: lxc644e0b1335f1: Gained carrier Feb 12 19:18:05.731580 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc644e0b1335f1: link becomes ready Feb 12 19:18:05.733248 systemd-networkd[1527]: lxcbcc354638753: Link UP Feb 12 19:18:05.747021 kernel: eth0: renamed from tmpc17c7 Feb 12 19:18:05.759752 systemd-networkd[1527]: lxcbcc354638753: Gained carrier Feb 12 19:18:05.760573 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbcc354638753: link becomes ready Feb 12 19:18:06.996612 systemd-networkd[1527]: lxc_health: Gained IPv6LL Feb 12 19:18:07.187656 systemd-networkd[1527]: lxc644e0b1335f1: Gained IPv6LL Feb 12 19:18:07.635650 systemd-networkd[1527]: lxcbcc354638753: Gained IPv6LL Feb 12 19:18:09.480332 env[1382]: time="2024-02-12T19:18:09.475661745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:18:09.480332 env[1382]: time="2024-02-12T19:18:09.475714425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:18:09.480332 env[1382]: time="2024-02-12T19:18:09.475724385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:18:09.480332 env[1382]: time="2024-02-12T19:18:09.475832745Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c17c7e963a9dc4ea9627081b77b0228b907e09345dc12e8e43ea8be05f47b573 pid=3699 runtime=io.containerd.runc.v2 Feb 12 19:18:09.491180 env[1382]: time="2024-02-12T19:18:09.490760176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:18:09.491180 env[1382]: time="2024-02-12T19:18:09.490804096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:18:09.491180 env[1382]: time="2024-02-12T19:18:09.490814616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:18:09.491180 env[1382]: time="2024-02-12T19:18:09.490951536Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1747c7ab06618ec727163627d6f47ddf2d7864e568af681e815a303ee050e247 pid=3720 runtime=io.containerd.runc.v2 Feb 12 19:18:09.497331 systemd[1]: run-containerd-runc-k8s.io-c17c7e963a9dc4ea9627081b77b0228b907e09345dc12e8e43ea8be05f47b573-runc.0hyqsV.mount: Deactivated successfully. Feb 12 19:18:09.506021 systemd[1]: Started cri-containerd-c17c7e963a9dc4ea9627081b77b0228b907e09345dc12e8e43ea8be05f47b573.scope. Feb 12 19:18:09.522851 systemd[1]: Started cri-containerd-1747c7ab06618ec727163627d6f47ddf2d7864e568af681e815a303ee050e247.scope. Feb 12 19:18:09.562563 env[1382]: time="2024-02-12T19:18:09.562517575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r5sdj,Uid:f16aae47-36df-4b20-bf9c-41b8494bf3a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1747c7ab06618ec727163627d6f47ddf2d7864e568af681e815a303ee050e247\"" Feb 12 19:18:09.565257 env[1382]: time="2024-02-12T19:18:09.565217694Z" level=info msg="CreateContainer within sandbox \"1747c7ab06618ec727163627d6f47ddf2d7864e568af681e815a303ee050e247\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:18:09.592930 env[1382]: time="2024-02-12T19:18:09.592878518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-sbhzb,Uid:438f0a09-7fe0-4442-bca8-2c7b885efcd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c17c7e963a9dc4ea9627081b77b0228b907e09345dc12e8e43ea8be05f47b573\"" Feb 12 19:18:09.601882 env[1382]: time="2024-02-12T19:18:09.601824113Z" level=info msg="CreateContainer within sandbox \"c17c7e963a9dc4ea9627081b77b0228b907e09345dc12e8e43ea8be05f47b573\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:18:09.652319 env[1382]: time="2024-02-12T19:18:09.652270524Z" level=info msg="CreateContainer within sandbox \"1747c7ab06618ec727163627d6f47ddf2d7864e568af681e815a303ee050e247\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c81a672ae87ba12bed5a27bf3bf330270b4ad37e5067cf92e10000dfbda85ab3\"" Feb 12 19:18:09.653281 env[1382]: time="2024-02-12T19:18:09.653232563Z" level=info msg="StartContainer for \"c81a672ae87ba12bed5a27bf3bf330270b4ad37e5067cf92e10000dfbda85ab3\"" Feb 12 19:18:09.677403 systemd[1]: Started cri-containerd-c81a672ae87ba12bed5a27bf3bf330270b4ad37e5067cf92e10000dfbda85ab3.scope. 
Feb 12 19:18:09.723370 env[1382]: time="2024-02-12T19:18:09.722399964Z" level=info msg="CreateContainer within sandbox \"c17c7e963a9dc4ea9627081b77b0228b907e09345dc12e8e43ea8be05f47b573\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"52719722c040e135918bc9eb284af6c6756ce9a3bd7694afb616472ea9c1e6b5\"" Feb 12 19:18:09.723722 env[1382]: time="2024-02-12T19:18:09.723696283Z" level=info msg="StartContainer for \"52719722c040e135918bc9eb284af6c6756ce9a3bd7694afb616472ea9c1e6b5\"" Feb 12 19:18:09.730055 env[1382]: time="2024-02-12T19:18:09.730001280Z" level=info msg="StartContainer for \"c81a672ae87ba12bed5a27bf3bf330270b4ad37e5067cf92e10000dfbda85ab3\" returns successfully" Feb 12 19:18:09.745840 systemd[1]: Started cri-containerd-52719722c040e135918bc9eb284af6c6756ce9a3bd7694afb616472ea9c1e6b5.scope. Feb 12 19:18:09.796822 env[1382]: time="2024-02-12T19:18:09.796759642Z" level=info msg="StartContainer for \"52719722c040e135918bc9eb284af6c6756ce9a3bd7694afb616472ea9c1e6b5\" returns successfully" Feb 12 19:18:10.019312 kubelet[2501]: I0212 19:18:10.019193 2501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-r5sdj" podStartSLOduration=26.019157835 pod.CreationTimestamp="2024-02-12 19:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:18:10.017208876 +0000 UTC m=+40.323313365" watchObservedRunningTime="2024-02-12 19:18:10.019157835 +0000 UTC m=+40.325262324" Feb 12 19:18:10.019647 kubelet[2501]: I0212 19:18:10.019317 2501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-sbhzb" podStartSLOduration=26.019298995 pod.CreationTimestamp="2024-02-12 19:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:18:09.971765222 +0000 UTC m=+40.277869711" watchObservedRunningTime="2024-02-12 19:18:10.019298995 +0000 UTC m=+40.325403484" Feb 12 19:20:38.718723 systemd[1]: Started sshd@5-10.200.20.31:22-10.200.12.6:55462.service. Feb 12 19:20:39.120675 sshd[3927]: Accepted publickey for core from 10.200.12.6 port 55462 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:20:39.122334 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:20:39.127624 systemd[1]: Started session-8.scope. Feb 12 19:20:39.129022 systemd-logind[1367]: New session 8 of user core. Feb 12 19:20:39.572759 sshd[3927]: pam_unix(sshd:session): session closed for user core Feb 12 19:20:39.575191 systemd[1]: sshd@5-10.200.20.31:22-10.200.12.6:55462.service: Deactivated successfully. Feb 12 19:20:39.575981 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 19:20:39.576601 systemd-logind[1367]: Session 8 logged out. Waiting for processes to exit. Feb 12 19:20:39.577405 systemd-logind[1367]: Removed session 8. Feb 12 19:20:44.645400 systemd[1]: Started sshd@6-10.200.20.31:22-10.200.12.6:55464.service. Feb 12 19:20:45.074137 sshd[3943]: Accepted publickey for core from 10.200.12.6 port 55464 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:20:45.075819 sshd[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:20:45.080424 systemd[1]: Started session-9.scope. Feb 12 19:20:45.080752 systemd-logind[1367]: New session 9 of user core. 
Feb 12 19:20:45.449429 sshd[3943]: pam_unix(sshd:session): session closed for user core Feb 12 19:20:45.452262 systemd-logind[1367]: Session 9 logged out. Waiting for processes to exit. Feb 12 19:20:45.452347 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 19:20:45.453065 systemd[1]: sshd@6-10.200.20.31:22-10.200.12.6:55464.service: Deactivated successfully. Feb 12 19:20:45.454126 systemd-logind[1367]: Removed session 9. Feb 12 19:20:50.523086 systemd[1]: Started sshd@7-10.200.20.31:22-10.200.12.6:54704.service. Feb 12 19:20:50.952487 sshd[3959]: Accepted publickey for core from 10.200.12.6 port 54704 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:20:50.953746 sshd[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:20:50.957321 systemd-logind[1367]: New session 10 of user core. Feb 12 19:20:50.960052 systemd[1]: Started session-10.scope. Feb 12 19:20:51.322919 sshd[3959]: pam_unix(sshd:session): session closed for user core Feb 12 19:20:51.325728 systemd[1]: sshd@7-10.200.20.31:22-10.200.12.6:54704.service: Deactivated successfully. Feb 12 19:20:51.326525 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 19:20:51.327174 systemd-logind[1367]: Session 10 logged out. Waiting for processes to exit. Feb 12 19:20:51.328179 systemd-logind[1367]: Removed session 10. Feb 12 19:20:56.390781 systemd[1]: Started sshd@8-10.200.20.31:22-10.200.12.6:54708.service. Feb 12 19:20:56.788882 sshd[3975]: Accepted publickey for core from 10.200.12.6 port 54708 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:20:56.790605 sshd[3975]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:20:56.794799 systemd[1]: Started session-11.scope. Feb 12 19:20:56.795096 systemd-logind[1367]: New session 11 of user core. Feb 12 19:20:57.149959 sshd[3975]: pam_unix(sshd:session): session closed for user core Feb 12 19:20:57.153060 systemd[1]: sshd@8-10.200.20.31:22-10.200.12.6:54708.service: Deactivated successfully. Feb 12 19:20:57.153863 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 19:20:57.154427 systemd-logind[1367]: Session 11 logged out. Waiting for processes to exit. Feb 12 19:20:57.155146 systemd-logind[1367]: Removed session 11. Feb 12 19:21:02.218463 systemd[1]: Started sshd@9-10.200.20.31:22-10.200.12.6:36536.service. Feb 12 19:21:02.615942 sshd[3988]: Accepted publickey for core from 10.200.12.6 port 36536 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:02.617200 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:02.621093 systemd-logind[1367]: New session 12 of user core. Feb 12 19:21:02.621614 systemd[1]: Started session-12.scope. Feb 12 19:21:02.964382 sshd[3988]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:02.967310 systemd-logind[1367]: Session 12 logged out. Waiting for processes to exit. Feb 12 19:21:02.967398 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 19:21:02.968042 systemd[1]: sshd@9-10.200.20.31:22-10.200.12.6:36536.service: Deactivated successfully. Feb 12 19:21:02.969104 systemd-logind[1367]: Removed session 12. Feb 12 19:21:03.031218 systemd[1]: Started sshd@10-10.200.20.31:22-10.200.12.6:36538.service. 
Feb 12 19:21:03.426678 sshd[4001]: Accepted publickey for core from 10.200.12.6 port 36538 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:03.428258 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:03.432687 systemd[1]: Started session-13.scope. Feb 12 19:21:03.433662 systemd-logind[1367]: New session 13 of user core. Feb 12 19:21:04.530294 sshd[4001]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:04.533203 systemd[1]: sshd@10-10.200.20.31:22-10.200.12.6:36538.service: Deactivated successfully. Feb 12 19:21:04.534009 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 19:21:04.534593 systemd-logind[1367]: Session 13 logged out. Waiting for processes to exit. Feb 12 19:21:04.535321 systemd-logind[1367]: Removed session 13. Feb 12 19:21:04.604351 systemd[1]: Started sshd@11-10.200.20.31:22-10.200.12.6:36548.service. Feb 12 19:21:05.032981 sshd[4011]: Accepted publickey for core from 10.200.12.6 port 36548 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:05.034575 sshd[4011]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:05.038759 systemd[1]: Started session-14.scope. Feb 12 19:21:05.039087 systemd-logind[1367]: New session 14 of user core. Feb 12 19:21:05.400612 sshd[4011]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:05.404828 systemd[1]: sshd@11-10.200.20.31:22-10.200.12.6:36548.service: Deactivated successfully. Feb 12 19:21:05.405638 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 19:21:05.406654 systemd-logind[1367]: Session 14 logged out. Waiting for processes to exit. Feb 12 19:21:05.407449 systemd-logind[1367]: Removed session 14. Feb 12 19:21:10.470388 systemd[1]: Started sshd@12-10.200.20.31:22-10.200.12.6:36454.service. Feb 12 19:21:10.874147 sshd[4023]: Accepted publickey for core from 10.200.12.6 port 36454 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:10.875884 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:10.879977 systemd-logind[1367]: New session 15 of user core. Feb 12 19:21:10.880484 systemd[1]: Started session-15.scope. Feb 12 19:21:11.228336 sshd[4023]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:11.230992 systemd-logind[1367]: Session 15 logged out. Waiting for processes to exit. Feb 12 19:21:11.231134 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 19:21:11.231970 systemd[1]: sshd@12-10.200.20.31:22-10.200.12.6:36454.service: Deactivated successfully. Feb 12 19:21:11.233233 systemd-logind[1367]: Removed session 15. Feb 12 19:21:16.306113 systemd[1]: Started sshd@13-10.200.20.31:22-10.200.12.6:36468.service. Feb 12 19:21:16.710227 sshd[4037]: Accepted publickey for core from 10.200.12.6 port 36468 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:16.711859 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:16.716150 systemd[1]: Started session-16.scope. Feb 12 19:21:16.716762 systemd-logind[1367]: New session 16 of user core. Feb 12 19:21:17.067726 sshd[4037]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:17.070523 systemd-logind[1367]: Session 16 logged out. Waiting for processes to exit. Feb 12 19:21:17.071084 systemd[1]: sshd@13-10.200.20.31:22-10.200.12.6:36468.service: Deactivated successfully. 
Feb 12 19:21:17.071844 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 19:21:17.072801 systemd-logind[1367]: Removed session 16. Feb 12 19:21:17.134684 systemd[1]: Started sshd@14-10.200.20.31:22-10.200.12.6:53888.service. Feb 12 19:21:17.530108 sshd[4049]: Accepted publickey for core from 10.200.12.6 port 53888 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:17.531375 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:17.535976 systemd[1]: Started session-17.scope. Feb 12 19:21:17.536299 systemd-logind[1367]: New session 17 of user core. Feb 12 19:21:17.912534 sshd[4049]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:17.915984 systemd[1]: sshd@14-10.200.20.31:22-10.200.12.6:53888.service: Deactivated successfully. Feb 12 19:21:17.916171 systemd-logind[1367]: Session 17 logged out. Waiting for processes to exit. Feb 12 19:21:17.916736 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 19:21:17.917590 systemd-logind[1367]: Removed session 17. Feb 12 19:21:17.981935 systemd[1]: Started sshd@15-10.200.20.31:22-10.200.12.6:53892.service. Feb 12 19:21:18.385999 sshd[4059]: Accepted publickey for core from 10.200.12.6 port 53892 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:18.387492 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:18.391259 systemd-logind[1367]: New session 18 of user core. Feb 12 19:21:18.391841 systemd[1]: Started session-18.scope. Feb 12 19:21:19.400244 sshd[4059]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:19.403136 systemd-logind[1367]: Session 18 logged out. Waiting for processes to exit. Feb 12 19:21:19.403875 systemd[1]: sshd@15-10.200.20.31:22-10.200.12.6:53892.service: Deactivated successfully. Feb 12 19:21:19.404746 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 19:21:19.405918 systemd-logind[1367]: Removed session 18. Feb 12 19:21:19.470746 systemd[1]: Started sshd@16-10.200.20.31:22-10.200.12.6:53900.service. Feb 12 19:21:19.867305 sshd[4124]: Accepted publickey for core from 10.200.12.6 port 53900 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:19.868598 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:19.872636 systemd-logind[1367]: New session 19 of user core. Feb 12 19:21:19.873078 systemd[1]: Started session-19.scope. Feb 12 19:21:20.302734 sshd[4124]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:20.305825 systemd[1]: sshd@16-10.200.20.31:22-10.200.12.6:53900.service: Deactivated successfully. Feb 12 19:21:20.306600 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 19:21:20.307254 systemd-logind[1367]: Session 19 logged out. Waiting for processes to exit. Feb 12 19:21:20.308107 systemd-logind[1367]: Removed session 19. Feb 12 19:21:20.369671 systemd[1]: Started sshd@17-10.200.20.31:22-10.200.12.6:53914.service. Feb 12 19:21:20.767871 sshd[4134]: Accepted publickey for core from 10.200.12.6 port 53914 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:20.769601 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:20.773964 systemd[1]: Started session-20.scope. Feb 12 19:21:20.775253 systemd-logind[1367]: New session 20 of user core. 
Feb 12 19:21:21.116250 sshd[4134]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:21.118745 systemd[1]: sshd@17-10.200.20.31:22-10.200.12.6:53914.service: Deactivated successfully. Feb 12 19:21:21.119566 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 19:21:21.120113 systemd-logind[1367]: Session 20 logged out. Waiting for processes to exit. Feb 12 19:21:21.120862 systemd-logind[1367]: Removed session 20. Feb 12 19:21:26.184821 systemd[1]: Started sshd@18-10.200.20.31:22-10.200.12.6:53918.service. Feb 12 19:21:26.580145 sshd[4173]: Accepted publickey for core from 10.200.12.6 port 53918 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:26.581766 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:26.585927 systemd-logind[1367]: New session 21 of user core. Feb 12 19:21:26.586474 systemd[1]: Started session-21.scope. Feb 12 19:21:26.924960 sshd[4173]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:26.927678 systemd-logind[1367]: Session 21 logged out. Waiting for processes to exit. Feb 12 19:21:26.928328 systemd[1]: sshd@18-10.200.20.31:22-10.200.12.6:53918.service: Deactivated successfully. Feb 12 19:21:26.929101 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 19:21:26.929823 systemd-logind[1367]: Removed session 21. Feb 12 19:21:31.993230 systemd[1]: Started sshd@19-10.200.20.31:22-10.200.12.6:33382.service. Feb 12 19:21:32.396370 sshd[4186]: Accepted publickey for core from 10.200.12.6 port 33382 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:32.397965 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:32.401698 systemd-logind[1367]: New session 22 of user core. Feb 12 19:21:32.402205 systemd[1]: Started session-22.scope. Feb 12 19:21:32.744560 sshd[4186]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:32.747248 systemd-logind[1367]: Session 22 logged out. Waiting for processes to exit. Feb 12 19:21:32.747489 systemd[1]: sshd@19-10.200.20.31:22-10.200.12.6:33382.service: Deactivated successfully. Feb 12 19:21:32.748254 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 19:21:32.749025 systemd-logind[1367]: Removed session 22. Feb 12 19:21:37.815470 systemd[1]: Started sshd@20-10.200.20.31:22-10.200.12.6:52188.service. Feb 12 19:21:38.212361 sshd[4198]: Accepted publickey for core from 10.200.12.6 port 52188 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:38.213959 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:38.218253 systemd[1]: Started session-23.scope. Feb 12 19:21:38.219594 systemd-logind[1367]: New session 23 of user core. Feb 12 19:21:38.557146 sshd[4198]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:38.559835 systemd[1]: sshd@20-10.200.20.31:22-10.200.12.6:52188.service: Deactivated successfully. Feb 12 19:21:38.560636 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 19:21:38.561250 systemd-logind[1367]: Session 23 logged out. Waiting for processes to exit. Feb 12 19:21:38.562100 systemd-logind[1367]: Removed session 23. Feb 12 19:21:38.624088 systemd[1]: Started sshd@21-10.200.20.31:22-10.200.12.6:52204.service. 
Feb 12 19:21:39.020760 sshd[4210]: Accepted publickey for core from 10.200.12.6 port 52204 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:39.022062 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:39.026448 systemd[1]: Started session-24.scope. Feb 12 19:21:39.026988 systemd-logind[1367]: New session 24 of user core. Feb 12 19:21:40.710317 systemd[1]: run-containerd-runc-k8s.io-38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc-runc.Vzk04A.mount: Deactivated successfully. Feb 12 19:21:40.714775 env[1382]: time="2024-02-12T19:21:40.714738528Z" level=info msg="StopContainer for \"3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4\" with timeout 30 (s)" Feb 12 19:21:40.715483 env[1382]: time="2024-02-12T19:21:40.715446101Z" level=info msg="Stop container \"3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4\" with signal terminated" Feb 12 19:21:40.730674 env[1382]: time="2024-02-12T19:21:40.730598560Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:21:40.732377 systemd[1]: cri-containerd-3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4.scope: Deactivated successfully. Feb 12 19:21:40.741259 env[1382]: time="2024-02-12T19:21:40.741211181Z" level=info msg="StopContainer for \"38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc\" with timeout 1 (s)" Feb 12 19:21:40.741685 env[1382]: time="2024-02-12T19:21:40.741649308Z" level=info msg="Stop container \"38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc\" with signal terminated" Feb 12 19:21:40.750557 systemd-networkd[1527]: lxc_health: Link DOWN Feb 12 19:21:40.750563 systemd-networkd[1527]: lxc_health: Lost carrier Feb 12 19:21:40.767775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4-rootfs.mount: Deactivated successfully. Feb 12 19:21:40.773146 systemd[1]: cri-containerd-38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc.scope: Deactivated successfully. Feb 12 19:21:40.773471 systemd[1]: cri-containerd-38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc.scope: Consumed 6.560s CPU time. Feb 12 19:21:40.792171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc-rootfs.mount: Deactivated successfully. 
Feb 12 19:21:40.857976 env[1382]: time="2024-02-12T19:21:40.857915776Z" level=info msg="shim disconnected" id=3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4 Feb 12 19:21:40.857976 env[1382]: time="2024-02-12T19:21:40.857971457Z" level=warning msg="cleaning up after shim disconnected" id=3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4 namespace=k8s.io Feb 12 19:21:40.857976 env[1382]: time="2024-02-12T19:21:40.857983737Z" level=info msg="cleaning up dead shim" Feb 12 19:21:40.858401 env[1382]: time="2024-02-12T19:21:40.858360783Z" level=info msg="shim disconnected" id=38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc Feb 12 19:21:40.858401 env[1382]: time="2024-02-12T19:21:40.858397384Z" level=warning msg="cleaning up after shim disconnected" id=38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc namespace=k8s.io Feb 12 19:21:40.858509 env[1382]: time="2024-02-12T19:21:40.858405744Z" level=info msg="cleaning up dead shim" Feb 12 19:21:40.865277 env[1382]: time="2024-02-12T19:21:40.865221861Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4278 runtime=io.containerd.runc.v2\n" Feb 12 19:21:40.866882 env[1382]: time="2024-02-12T19:21:40.866843128Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4277 runtime=io.containerd.runc.v2\n" Feb 12 19:21:40.885743 env[1382]: time="2024-02-12T19:21:40.885696851Z" level=info msg="StopContainer for \"38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc\" returns successfully" Feb 12 19:21:40.886335 env[1382]: time="2024-02-12T19:21:40.886309581Z" level=info msg="StopPodSandbox for \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\"" Feb 12 19:21:40.886479 env[1382]: time="2024-02-12T19:21:40.886459064Z" level=info msg="Container to stop \"82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:21:40.886572 env[1382]: time="2024-02-12T19:21:40.886555465Z" level=info msg="Container to stop \"8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:21:40.886641 env[1382]: time="2024-02-12T19:21:40.886624186Z" level=info msg="Container to stop \"5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:21:40.886705 env[1382]: time="2024-02-12T19:21:40.886690308Z" level=info msg="Container to stop \"4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:21:40.886762 env[1382]: time="2024-02-12T19:21:40.886747269Z" level=info msg="Container to stop \"38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:21:40.888174 env[1382]: time="2024-02-12T19:21:40.888135012Z" level=info msg="StopContainer for \"3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4\" returns successfully" Feb 12 19:21:40.888645 env[1382]: time="2024-02-12T19:21:40.888620781Z" level=info msg="StopPodSandbox for \"0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c\"" Feb 12 19:21:40.888777 env[1382]: time="2024-02-12T19:21:40.888758063Z" level=info 
msg="Container to stop \"3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:21:40.894189 systemd[1]: cri-containerd-0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c.scope: Deactivated successfully. Feb 12 19:21:40.897962 systemd[1]: cri-containerd-e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b.scope: Deactivated successfully. Feb 12 19:21:40.941394 env[1382]: time="2024-02-12T19:21:40.941344522Z" level=info msg="shim disconnected" id=0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c Feb 12 19:21:40.941907 env[1382]: time="2024-02-12T19:21:40.941885571Z" level=warning msg="cleaning up after shim disconnected" id=0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c namespace=k8s.io Feb 12 19:21:40.941999 env[1382]: time="2024-02-12T19:21:40.941985293Z" level=info msg="cleaning up dead shim" Feb 12 19:21:40.942460 env[1382]: time="2024-02-12T19:21:40.941879651Z" level=info msg="shim disconnected" id=e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b Feb 12 19:21:40.942540 env[1382]: time="2024-02-12T19:21:40.942463901Z" level=warning msg="cleaning up after shim disconnected" id=e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b namespace=k8s.io Feb 12 19:21:40.942540 env[1382]: time="2024-02-12T19:21:40.942477381Z" level=info msg="cleaning up dead shim" Feb 12 19:21:40.950426 env[1382]: time="2024-02-12T19:21:40.950385076Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4340 runtime=io.containerd.runc.v2\n" Feb 12 19:21:40.950966 env[1382]: time="2024-02-12T19:21:40.950938926Z" level=info msg="TearDown network for sandbox \"0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c\" successfully" Feb 12 19:21:40.951056 env[1382]: time="2024-02-12T19:21:40.951039208Z" level=info msg="StopPodSandbox for \"0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c\" returns successfully" Feb 12 19:21:40.951201 env[1382]: time="2024-02-12T19:21:40.951092328Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4341 runtime=io.containerd.runc.v2\n" Feb 12 19:21:40.951879 env[1382]: time="2024-02-12T19:21:40.951855101Z" level=info msg="TearDown network for sandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" successfully" Feb 12 19:21:40.951981 env[1382]: time="2024-02-12T19:21:40.951964143Z" level=info msg="StopPodSandbox for \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" returns successfully" Feb 12 19:21:40.967543 kubelet[2501]: I0212 19:21:40.964450 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq4w7\" (UniqueName: \"kubernetes.io/projected/b64db31d-d12a-47be-9728-06a0de150be9-kube-api-access-rq4w7\") pod \"b64db31d-d12a-47be-9728-06a0de150be9\" (UID: \"b64db31d-d12a-47be-9728-06a0de150be9\") " Feb 12 19:21:40.967543 kubelet[2501]: I0212 19:21:40.964515 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-hostproc\") pod \"849642ce-03a5-4154-ba6d-ebabf8c717fc\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " Feb 12 19:21:40.967543 kubelet[2501]: I0212 19:21:40.964537 2501 reconciler_common.go:169] 
"operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/849642ce-03a5-4154-ba6d-ebabf8c717fc-hubble-tls\") pod \"849642ce-03a5-4154-ba6d-ebabf8c717fc\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " Feb 12 19:21:40.967543 kubelet[2501]: I0212 19:21:40.964555 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-cni-path\") pod \"849642ce-03a5-4154-ba6d-ebabf8c717fc\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " Feb 12 19:21:40.967543 kubelet[2501]: I0212 19:21:40.964610 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-host-proc-sys-net\") pod \"849642ce-03a5-4154-ba6d-ebabf8c717fc\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " Feb 12 19:21:40.967543 kubelet[2501]: I0212 19:21:40.964629 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-cilium-run\") pod \"849642ce-03a5-4154-ba6d-ebabf8c717fc\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " Feb 12 19:21:40.968001 kubelet[2501]: I0212 19:21:40.964657 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-bpf-maps\") pod \"849642ce-03a5-4154-ba6d-ebabf8c717fc\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " Feb 12 19:21:40.968001 kubelet[2501]: I0212 19:21:40.964676 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-cilium-cgroup\") pod \"849642ce-03a5-4154-ba6d-ebabf8c717fc\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " Feb 12 19:21:40.968001 kubelet[2501]: I0212 19:21:40.964693 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-xtables-lock\") pod \"849642ce-03a5-4154-ba6d-ebabf8c717fc\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " Feb 12 19:21:40.968001 kubelet[2501]: I0212 19:21:40.964709 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-etc-cni-netd\") pod \"849642ce-03a5-4154-ba6d-ebabf8c717fc\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " Feb 12 19:21:40.968001 kubelet[2501]: I0212 19:21:40.964741 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/849642ce-03a5-4154-ba6d-ebabf8c717fc-cilium-config-path\") pod \"849642ce-03a5-4154-ba6d-ebabf8c717fc\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " Feb 12 19:21:40.968001 kubelet[2501]: I0212 19:21:40.964765 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-849kx\" (UniqueName: \"kubernetes.io/projected/849642ce-03a5-4154-ba6d-ebabf8c717fc-kube-api-access-849kx\") pod \"849642ce-03a5-4154-ba6d-ebabf8c717fc\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " Feb 12 19:21:40.968140 kubelet[2501]: I0212 19:21:40.964784 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/849642ce-03a5-4154-ba6d-ebabf8c717fc-clustermesh-secrets\") pod \"849642ce-03a5-4154-ba6d-ebabf8c717fc\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " Feb 12 19:21:40.968140 kubelet[2501]: I0212 19:21:40.964814 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b64db31d-d12a-47be-9728-06a0de150be9-cilium-config-path\") pod \"b64db31d-d12a-47be-9728-06a0de150be9\" (UID: \"b64db31d-d12a-47be-9728-06a0de150be9\") " Feb 12 19:21:40.968140 kubelet[2501]: I0212 19:21:40.964836 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-lib-modules\") pod \"849642ce-03a5-4154-ba6d-ebabf8c717fc\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " Feb 12 19:21:40.968140 kubelet[2501]: I0212 19:21:40.964855 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-host-proc-sys-kernel\") pod \"849642ce-03a5-4154-ba6d-ebabf8c717fc\" (UID: \"849642ce-03a5-4154-ba6d-ebabf8c717fc\") " Feb 12 19:21:40.968140 kubelet[2501]: I0212 19:21:40.964920 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "849642ce-03a5-4154-ba6d-ebabf8c717fc" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:40.968253 kubelet[2501]: I0212 19:21:40.965544 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-hostproc" (OuterVolumeSpecName: "hostproc") pod "849642ce-03a5-4154-ba6d-ebabf8c717fc" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:40.968253 kubelet[2501]: I0212 19:21:40.965773 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "849642ce-03a5-4154-ba6d-ebabf8c717fc" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:40.968253 kubelet[2501]: I0212 19:21:40.965809 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-cni-path" (OuterVolumeSpecName: "cni-path") pod "849642ce-03a5-4154-ba6d-ebabf8c717fc" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:40.968253 kubelet[2501]: I0212 19:21:40.965825 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "849642ce-03a5-4154-ba6d-ebabf8c717fc" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:40.968253 kubelet[2501]: I0212 19:21:40.965840 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "849642ce-03a5-4154-ba6d-ebabf8c717fc" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:40.968371 kubelet[2501]: I0212 19:21:40.965854 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "849642ce-03a5-4154-ba6d-ebabf8c717fc" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:40.968371 kubelet[2501]: I0212 19:21:40.965871 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "849642ce-03a5-4154-ba6d-ebabf8c717fc" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:40.968371 kubelet[2501]: I0212 19:21:40.966050 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "849642ce-03a5-4154-ba6d-ebabf8c717fc" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:40.968371 kubelet[2501]: W0212 19:21:40.966147 2501 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/849642ce-03a5-4154-ba6d-ebabf8c717fc/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:21:40.968598 kubelet[2501]: I0212 19:21:40.968574 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b64db31d-d12a-47be-9728-06a0de150be9-kube-api-access-rq4w7" (OuterVolumeSpecName: "kube-api-access-rq4w7") pod "b64db31d-d12a-47be-9728-06a0de150be9" (UID: "b64db31d-d12a-47be-9728-06a0de150be9"). InnerVolumeSpecName "kube-api-access-rq4w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:21:40.968713 kubelet[2501]: I0212 19:21:40.968700 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "849642ce-03a5-4154-ba6d-ebabf8c717fc" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:40.968911 kubelet[2501]: W0212 19:21:40.968897 2501 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b64db31d-d12a-47be-9728-06a0de150be9/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:21:40.970946 kubelet[2501]: I0212 19:21:40.970905 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b64db31d-d12a-47be-9728-06a0de150be9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b64db31d-d12a-47be-9728-06a0de150be9" (UID: "b64db31d-d12a-47be-9728-06a0de150be9"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:21:40.971143 kubelet[2501]: I0212 19:21:40.971127 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849642ce-03a5-4154-ba6d-ebabf8c717fc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "849642ce-03a5-4154-ba6d-ebabf8c717fc" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:21:40.971662 kubelet[2501]: I0212 19:21:40.971642 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/849642ce-03a5-4154-ba6d-ebabf8c717fc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "849642ce-03a5-4154-ba6d-ebabf8c717fc" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:21:40.973471 kubelet[2501]: I0212 19:21:40.973446 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/849642ce-03a5-4154-ba6d-ebabf8c717fc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "849642ce-03a5-4154-ba6d-ebabf8c717fc" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:21:40.975105 kubelet[2501]: I0212 19:21:40.975065 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/849642ce-03a5-4154-ba6d-ebabf8c717fc-kube-api-access-849kx" (OuterVolumeSpecName: "kube-api-access-849kx") pod "849642ce-03a5-4154-ba6d-ebabf8c717fc" (UID: "849642ce-03a5-4154-ba6d-ebabf8c717fc"). InnerVolumeSpecName "kube-api-access-849kx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:21:41.065526 kubelet[2501]: I0212 19:21:41.065477 2501 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-849kx\" (UniqueName: \"kubernetes.io/projected/849642ce-03a5-4154-ba6d-ebabf8c717fc-kube-api-access-849kx\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.065722 kubelet[2501]: I0212 19:21:41.065711 2501 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/849642ce-03a5-4154-ba6d-ebabf8c717fc-clustermesh-secrets\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.065782 kubelet[2501]: I0212 19:21:41.065774 2501 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b64db31d-d12a-47be-9728-06a0de150be9-cilium-config-path\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.065840 kubelet[2501]: I0212 19:21:41.065831 2501 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-lib-modules\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.065894 kubelet[2501]: I0212 19:21:41.065886 2501 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.065960 kubelet[2501]: I0212 19:21:41.065952 2501 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-hostproc\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.066019 kubelet[2501]: I0212 19:21:41.066008 2501 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/849642ce-03a5-4154-ba6d-ebabf8c717fc-hubble-tls\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.066081 kubelet[2501]: I0212 19:21:41.066073 2501 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-cni-path\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.066141 kubelet[2501]: I0212 19:21:41.066133 2501 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-rq4w7\" (UniqueName: \"kubernetes.io/projected/b64db31d-d12a-47be-9728-06a0de150be9-kube-api-access-rq4w7\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.066203 kubelet[2501]: I0212 19:21:41.066191 2501 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-host-proc-sys-net\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.066264 kubelet[2501]: I0212 19:21:41.066255 2501 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-bpf-maps\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.066321 kubelet[2501]: I0212 19:21:41.066312 2501 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-cilium-cgroup\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.066376 kubelet[2501]: I0212 
19:21:41.066367 2501 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-cilium-run\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.066433 kubelet[2501]: I0212 19:21:41.066425 2501 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/849642ce-03a5-4154-ba6d-ebabf8c717fc-cilium-config-path\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.066490 kubelet[2501]: I0212 19:21:41.066481 2501 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-xtables-lock\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.066575 kubelet[2501]: I0212 19:21:41.066566 2501 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/849642ce-03a5-4154-ba6d-ebabf8c717fc-etc-cni-netd\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:41.315754 kubelet[2501]: I0212 19:21:41.315728 2501 scope.go:115] "RemoveContainer" containerID="38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc" Feb 12 19:21:41.317557 env[1382]: time="2024-02-12T19:21:41.317492803Z" level=info msg="RemoveContainer for \"38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc\"" Feb 12 19:21:41.320354 systemd[1]: Removed slice kubepods-burstable-pod849642ce_03a5_4154_ba6d_ebabf8c717fc.slice. Feb 12 19:21:41.320436 systemd[1]: kubepods-burstable-pod849642ce_03a5_4154_ba6d_ebabf8c717fc.slice: Consumed 6.647s CPU time. Feb 12 19:21:41.323943 systemd[1]: Removed slice kubepods-besteffort-podb64db31d_d12a_47be_9728_06a0de150be9.slice. 
Feb 12 19:21:41.339365 env[1382]: time="2024-02-12T19:21:41.339313774Z" level=info msg="RemoveContainer for \"38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc\" returns successfully" Feb 12 19:21:41.339640 kubelet[2501]: I0212 19:21:41.339619 2501 scope.go:115] "RemoveContainer" containerID="4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42" Feb 12 19:21:41.341986 env[1382]: time="2024-02-12T19:21:41.341715295Z" level=info msg="RemoveContainer for \"4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42\"" Feb 12 19:21:41.349883 env[1382]: time="2024-02-12T19:21:41.349765832Z" level=info msg="RemoveContainer for \"4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42\" returns successfully" Feb 12 19:21:41.350016 kubelet[2501]: I0212 19:21:41.349995 2501 scope.go:115] "RemoveContainer" containerID="8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8" Feb 12 19:21:41.351259 env[1382]: time="2024-02-12T19:21:41.351222657Z" level=info msg="RemoveContainer for \"8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8\"" Feb 12 19:21:41.360952 env[1382]: time="2024-02-12T19:21:41.360901261Z" level=info msg="RemoveContainer for \"8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8\" returns successfully" Feb 12 19:21:41.361214 kubelet[2501]: I0212 19:21:41.361192 2501 scope.go:115] "RemoveContainer" containerID="5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc" Feb 12 19:21:41.362538 env[1382]: time="2024-02-12T19:21:41.362274124Z" level=info msg="RemoveContainer for \"5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc\"" Feb 12 19:21:41.376160 env[1382]: time="2024-02-12T19:21:41.376037238Z" level=info msg="RemoveContainer for \"5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc\" returns successfully" Feb 12 19:21:41.376443 kubelet[2501]: I0212 19:21:41.376426 2501 scope.go:115] "RemoveContainer" containerID="82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41" Feb 12 19:21:41.377483 env[1382]: time="2024-02-12T19:21:41.377450622Z" level=info msg="RemoveContainer for \"82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41\"" Feb 12 19:21:41.388392 env[1382]: time="2024-02-12T19:21:41.388349128Z" level=info msg="RemoveContainer for \"82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41\" returns successfully" Feb 12 19:21:41.388664 kubelet[2501]: I0212 19:21:41.388638 2501 scope.go:115] "RemoveContainer" containerID="38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc" Feb 12 19:21:41.388945 env[1382]: time="2024-02-12T19:21:41.388872697Z" level=error msg="ContainerStatus for \"38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc\": not found" Feb 12 19:21:41.389084 kubelet[2501]: E0212 19:21:41.389059 2501 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc\": not found" containerID="38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc" Feb 12 19:21:41.389131 kubelet[2501]: I0212 19:21:41.389097 2501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc} err="failed 
to get container status \"38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc\": rpc error: code = NotFound desc = an error occurred when try to find container \"38cd8ee0376c042b30ad886fa0ba5335f72c7de918ea6b0588e8c967c33bedcc\": not found" Feb 12 19:21:41.389131 kubelet[2501]: I0212 19:21:41.389107 2501 scope.go:115] "RemoveContainer" containerID="4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42" Feb 12 19:21:41.389384 env[1382]: time="2024-02-12T19:21:41.389304624Z" level=error msg="ContainerStatus for \"4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42\": not found" Feb 12 19:21:41.389446 kubelet[2501]: E0212 19:21:41.389427 2501 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42\": not found" containerID="4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42" Feb 12 19:21:41.389477 kubelet[2501]: I0212 19:21:41.389446 2501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42} err="failed to get container status \"4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a57f932d970a5c2bee2b8a463d45bb25d06cf2cfbaf77f23b4f0a376b5e5b42\": not found" Feb 12 19:21:41.389477 kubelet[2501]: I0212 19:21:41.389455 2501 scope.go:115] "RemoveContainer" containerID="8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8" Feb 12 19:21:41.389665 env[1382]: time="2024-02-12T19:21:41.389614549Z" level=error msg="ContainerStatus for \"8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8\": not found" Feb 12 19:21:41.389840 kubelet[2501]: E0212 19:21:41.389820 2501 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8\": not found" containerID="8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8" Feb 12 19:21:41.389931 kubelet[2501]: I0212 19:21:41.389921 2501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8} err="failed to get container status \"8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e3a44b19f7aa02d84927f91ec1731ac66ea59183bca6765bab989d42f423db8\": not found" Feb 12 19:21:41.390006 kubelet[2501]: I0212 19:21:41.389997 2501 scope.go:115] "RemoveContainer" containerID="5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc" Feb 12 19:21:41.390304 env[1382]: time="2024-02-12T19:21:41.390228720Z" level=error msg="ContainerStatus for \"5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc\": not found" Feb 12 19:21:41.390370 kubelet[2501]: E0212 19:21:41.390356 2501 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc\": not found" containerID="5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc" Feb 12 19:21:41.390400 kubelet[2501]: I0212 19:21:41.390381 2501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc} err="failed to get container status \"5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f1361e5c74f9cb0836f45dc31d4fe0ac05f914a72429643636fbecf32e53fbc\": not found" Feb 12 19:21:41.390400 kubelet[2501]: I0212 19:21:41.390391 2501 scope.go:115] "RemoveContainer" containerID="82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41" Feb 12 19:21:41.390691 env[1382]: time="2024-02-12T19:21:41.390634847Z" level=error msg="ContainerStatus for \"82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41\": not found" Feb 12 19:21:41.390816 kubelet[2501]: E0212 19:21:41.390798 2501 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41\": not found" containerID="82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41" Feb 12 19:21:41.390872 kubelet[2501]: I0212 19:21:41.390827 2501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41} err="failed to get container status \"82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41\": rpc error: code = NotFound desc = an error occurred when try to find container \"82fb86ce8854b2d32b92a570f5239f777aeeaca0f9b7a92ff8b8786600cf0f41\": not found" Feb 12 19:21:41.390872 kubelet[2501]: I0212 19:21:41.390841 2501 scope.go:115] "RemoveContainer" containerID="3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4" Feb 12 19:21:41.392025 env[1382]: time="2024-02-12T19:21:41.391990030Z" level=info msg="RemoveContainer for \"3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4\"" Feb 12 19:21:41.411571 env[1382]: time="2024-02-12T19:21:41.411429440Z" level=info msg="RemoveContainer for \"3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4\" returns successfully" Feb 12 19:21:41.411784 kubelet[2501]: I0212 19:21:41.411760 2501 scope.go:115] "RemoveContainer" containerID="3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4" Feb 12 19:21:41.412223 env[1382]: time="2024-02-12T19:21:41.412161613Z" level=error msg="ContainerStatus for \"3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4\": not found" Feb 12 19:21:41.412349 kubelet[2501]: E0212 19:21:41.412331 2501 remote_runtime.go:415] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4\": not found" containerID="3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4" Feb 12 19:21:41.412400 kubelet[2501]: I0212 19:21:41.412369 2501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4} err="failed to get container status \"3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d9e33bf68881525a4518e699c60a024949580daf487c058f66d0b1719d130a4\": not found" Feb 12 19:21:41.701348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b-rootfs.mount: Deactivated successfully. Feb 12 19:21:41.701436 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b-shm.mount: Deactivated successfully. Feb 12 19:21:41.701524 systemd[1]: var-lib-kubelet-pods-849642ce\x2d03a5\x2d4154\x2dba6d\x2debabf8c717fc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:21:41.701580 systemd[1]: var-lib-kubelet-pods-849642ce\x2d03a5\x2d4154\x2dba6d\x2debabf8c717fc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:21:41.701630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c-rootfs.mount: Deactivated successfully. Feb 12 19:21:41.701684 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c-shm.mount: Deactivated successfully. Feb 12 19:21:41.701733 systemd[1]: var-lib-kubelet-pods-b64db31d\x2dd12a\x2d47be\x2d9728\x2d06a0de150be9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drq4w7.mount: Deactivated successfully. Feb 12 19:21:41.701781 systemd[1]: var-lib-kubelet-pods-849642ce\x2d03a5\x2d4154\x2dba6d\x2debabf8c717fc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d849kx.mount: Deactivated successfully. Feb 12 19:21:41.867960 kubelet[2501]: I0212 19:21:41.867928 2501 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=849642ce-03a5-4154-ba6d-ebabf8c717fc path="/var/lib/kubelet/pods/849642ce-03a5-4154-ba6d-ebabf8c717fc/volumes" Feb 12 19:21:41.868561 kubelet[2501]: I0212 19:21:41.868541 2501 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b64db31d-d12a-47be-9728-06a0de150be9 path="/var/lib/kubelet/pods/b64db31d-d12a-47be-9728-06a0de150be9/volumes" Feb 12 19:21:42.704691 sshd[4210]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:42.707213 systemd[1]: sshd@21-10.200.20.31:22-10.200.12.6:52204.service: Deactivated successfully. Feb 12 19:21:42.707967 systemd[1]: session-24.scope: Deactivated successfully. Feb 12 19:21:42.708541 systemd-logind[1367]: Session 24 logged out. Waiting for processes to exit. Feb 12 19:21:42.709310 systemd-logind[1367]: Removed session 24. Feb 12 19:21:42.780506 systemd[1]: Started sshd@22-10.200.20.31:22-10.200.12.6:52210.service. 
Feb 12 19:21:43.209030 sshd[4374]: Accepted publickey for core from 10.200.12.6 port 52210 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:43.210729 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:43.215080 systemd[1]: Started session-25.scope. Feb 12 19:21:43.215896 systemd-logind[1367]: New session 25 of user core. Feb 12 19:21:44.312039 kubelet[2501]: I0212 19:21:44.312000 2501 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:21:44.312389 kubelet[2501]: E0212 19:21:44.312070 2501 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="849642ce-03a5-4154-ba6d-ebabf8c717fc" containerName="mount-cgroup" Feb 12 19:21:44.312389 kubelet[2501]: E0212 19:21:44.312082 2501 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="849642ce-03a5-4154-ba6d-ebabf8c717fc" containerName="clean-cilium-state" Feb 12 19:21:44.312389 kubelet[2501]: E0212 19:21:44.312089 2501 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="849642ce-03a5-4154-ba6d-ebabf8c717fc" containerName="cilium-agent" Feb 12 19:21:44.312389 kubelet[2501]: E0212 19:21:44.312098 2501 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b64db31d-d12a-47be-9728-06a0de150be9" containerName="cilium-operator" Feb 12 19:21:44.312389 kubelet[2501]: E0212 19:21:44.312104 2501 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="849642ce-03a5-4154-ba6d-ebabf8c717fc" containerName="apply-sysctl-overwrites" Feb 12 19:21:44.312389 kubelet[2501]: E0212 19:21:44.312111 2501 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="849642ce-03a5-4154-ba6d-ebabf8c717fc" containerName="mount-bpf-fs" Feb 12 19:21:44.312389 kubelet[2501]: I0212 19:21:44.312144 2501 memory_manager.go:346] "RemoveStaleState removing state" podUID="b64db31d-d12a-47be-9728-06a0de150be9" containerName="cilium-operator" Feb 12 19:21:44.312389 kubelet[2501]: I0212 19:21:44.312152 2501 memory_manager.go:346] "RemoveStaleState removing state" podUID="849642ce-03a5-4154-ba6d-ebabf8c717fc" containerName="cilium-agent" Feb 12 19:21:44.317202 systemd[1]: Created slice kubepods-burstable-pod85690038_7e61_4370_9d49_b8bf791be53d.slice. Feb 12 19:21:44.361370 sshd[4374]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:44.365031 systemd[1]: sshd@22-10.200.20.31:22-10.200.12.6:52210.service: Deactivated successfully. Feb 12 19:21:44.365819 systemd[1]: session-25.scope: Deactivated successfully. Feb 12 19:21:44.366917 systemd-logind[1367]: Session 25 logged out. Waiting for processes to exit. Feb 12 19:21:44.367663 systemd-logind[1367]: Removed session 25. 
Feb 12 19:21:44.380586 kubelet[2501]: I0212 19:21:44.380556 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-hostproc\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.380783 kubelet[2501]: I0212 19:21:44.380772 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/85690038-7e61-4370-9d49-b8bf791be53d-hubble-tls\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.380898 kubelet[2501]: I0212 19:21:44.380887 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-cilium-cgroup\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.380985 kubelet[2501]: I0212 19:21:44.380976 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-cni-path\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.381091 kubelet[2501]: I0212 19:21:44.381066 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85690038-7e61-4370-9d49-b8bf791be53d-cilium-config-path\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.381144 kubelet[2501]: I0212 19:21:44.381105 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-lib-modules\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.381144 kubelet[2501]: I0212 19:21:44.381128 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-etc-cni-netd\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.381197 kubelet[2501]: I0212 19:21:44.381149 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-bpf-maps\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.381197 kubelet[2501]: I0212 19:21:44.381170 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/85690038-7e61-4370-9d49-b8bf791be53d-clustermesh-secrets\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.381197 kubelet[2501]: I0212 19:21:44.381188 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-host-proc-sys-kernel\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.381263 kubelet[2501]: I0212 19:21:44.381208 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/85690038-7e61-4370-9d49-b8bf791be53d-cilium-ipsec-secrets\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.381263 kubelet[2501]: I0212 19:21:44.381228 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps2rx\" (UniqueName: \"kubernetes.io/projected/85690038-7e61-4370-9d49-b8bf791be53d-kube-api-access-ps2rx\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.381263 kubelet[2501]: I0212 19:21:44.381249 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-cilium-run\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.381331 kubelet[2501]: I0212 19:21:44.381268 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-xtables-lock\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.381331 kubelet[2501]: I0212 19:21:44.381287 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-host-proc-sys-net\") pod \"cilium-tzk8z\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " pod="kube-system/cilium-tzk8z" Feb 12 19:21:44.432028 systemd[1]: Started sshd@23-10.200.20.31:22-10.200.12.6:52214.service. Feb 12 19:21:44.620746 env[1382]: time="2024-02-12T19:21:44.620627699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tzk8z,Uid:85690038-7e61-4370-9d49-b8bf791be53d,Namespace:kube-system,Attempt:0,}" Feb 12 19:21:44.720883 env[1382]: time="2024-02-12T19:21:44.720803056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:21:44.721023 env[1382]: time="2024-02-12T19:21:44.720903257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:21:44.721023 env[1382]: time="2024-02-12T19:21:44.720927898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:21:44.721207 env[1382]: time="2024-02-12T19:21:44.721173782Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b pid=4398 runtime=io.containerd.runc.v2 Feb 12 19:21:44.737487 systemd[1]: Started cri-containerd-9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b.scope. 
Feb 12 19:21:44.764832 env[1382]: time="2024-02-12T19:21:44.764792152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tzk8z,Uid:85690038-7e61-4370-9d49-b8bf791be53d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b\"" Feb 12 19:21:44.771207 env[1382]: time="2024-02-12T19:21:44.771167979Z" level=info msg="CreateContainer within sandbox \"9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:21:44.818553 env[1382]: time="2024-02-12T19:21:44.818476691Z" level=info msg="CreateContainer within sandbox \"9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36\"" Feb 12 19:21:44.819405 env[1382]: time="2024-02-12T19:21:44.819378186Z" level=info msg="StartContainer for \"d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36\"" Feb 12 19:21:44.834703 systemd[1]: Started cri-containerd-d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36.scope. Feb 12 19:21:44.837750 sshd[4384]: Accepted publickey for core from 10.200.12.6 port 52214 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:44.838905 sshd[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:44.844402 systemd[1]: Started session-26.scope. Feb 12 19:21:44.845122 systemd-logind[1367]: New session 26 of user core. Feb 12 19:21:44.853408 systemd[1]: cri-containerd-d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36.scope: Deactivated successfully. Feb 12 19:21:44.928217 env[1382]: time="2024-02-12T19:21:44.928096726Z" level=info msg="shim disconnected" id=d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36 Feb 12 19:21:44.928431 env[1382]: time="2024-02-12T19:21:44.928411291Z" level=warning msg="cleaning up after shim disconnected" id=d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36 namespace=k8s.io Feb 12 19:21:44.928516 env[1382]: time="2024-02-12T19:21:44.928481932Z" level=info msg="cleaning up dead shim" Feb 12 19:21:44.934974 env[1382]: time="2024-02-12T19:21:44.934929600Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4460 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:21:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 19:21:44.935447 env[1382]: time="2024-02-12T19:21:44.935345767Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed" Feb 12 19:21:44.935837 env[1382]: time="2024-02-12T19:21:44.935607371Z" level=error msg="Failed to pipe stdout of container \"d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36\"" error="reading from a closed fifo" Feb 12 19:21:44.935940 env[1382]: time="2024-02-12T19:21:44.935607731Z" level=error msg="Failed to pipe stderr of container \"d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36\"" error="reading from a closed fifo" Feb 12 19:21:44.963301 env[1382]: time="2024-02-12T19:21:44.963226714Z" level=error msg="StartContainer for \"d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36\" 
failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 19:21:44.963950 kubelet[2501]: E0212 19:21:44.963708 2501 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36" Feb 12 19:21:44.963950 kubelet[2501]: E0212 19:21:44.963853 2501 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 19:21:44.963950 kubelet[2501]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 19:21:44.963950 kubelet[2501]: rm /hostbin/cilium-mount Feb 12 19:21:44.964172 kubelet[2501]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ps2rx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-tzk8z_kube-system(85690038-7e61-4370-9d49-b8bf791be53d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 19:21:44.964252 kubelet[2501]: E0212 19:21:44.963927 2501 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-tzk8z" podUID=85690038-7e61-4370-9d49-b8bf791be53d Feb 12 19:21:44.989614 kubelet[2501]: E0212 19:21:44.989586 2501 kubelet.go:2475] "Container runtime network not 
ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:21:45.201729 sshd[4384]: pam_unix(sshd:session): session closed for user core Feb 12 19:21:45.205041 systemd[1]: sshd@23-10.200.20.31:22-10.200.12.6:52214.service: Deactivated successfully. Feb 12 19:21:45.205268 systemd-logind[1367]: Session 26 logged out. Waiting for processes to exit. Feb 12 19:21:45.205813 systemd[1]: session-26.scope: Deactivated successfully. Feb 12 19:21:45.206591 systemd-logind[1367]: Removed session 26. Feb 12 19:21:45.268643 systemd[1]: Started sshd@24-10.200.20.31:22-10.200.12.6:52218.service. Feb 12 19:21:45.328056 env[1382]: time="2024-02-12T19:21:45.327992193Z" level=info msg="StopPodSandbox for \"9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b\"" Feb 12 19:21:45.328202 env[1382]: time="2024-02-12T19:21:45.328070514Z" level=info msg="Container to stop \"d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:21:45.346051 systemd[1]: cri-containerd-9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b.scope: Deactivated successfully. Feb 12 19:21:45.412970 env[1382]: time="2024-02-12T19:21:45.412912327Z" level=info msg="shim disconnected" id=9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b Feb 12 19:21:45.413527 env[1382]: time="2024-02-12T19:21:45.413484656Z" level=warning msg="cleaning up after shim disconnected" id=9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b namespace=k8s.io Feb 12 19:21:45.413619 env[1382]: time="2024-02-12T19:21:45.413606178Z" level=info msg="cleaning up dead shim" Feb 12 19:21:45.420573 env[1382]: time="2024-02-12T19:21:45.420529894Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4503 runtime=io.containerd.runc.v2\n" Feb 12 19:21:45.421007 env[1382]: time="2024-02-12T19:21:45.420981981Z" level=info msg="TearDown network for sandbox \"9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b\" successfully" Feb 12 19:21:45.421095 env[1382]: time="2024-02-12T19:21:45.421079423Z" level=info msg="StopPodSandbox for \"9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b\" returns successfully" Feb 12 19:21:45.452419 kubelet[2501]: I0212 19:21:45.451960 2501 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-e08ac1c56f" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 19:21:45.451906616 +0000 UTC m=+255.758011065 LastTransitionTime:2024-02-12 19:21:45.451906616 +0000 UTC m=+255.758011065 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 19:21:45.490554 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b-shm.mount: Deactivated successfully. 
Feb 12 19:21:45.589312 kubelet[2501]: I0212 19:21:45.588883 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-cni-path\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " Feb 12 19:21:45.589312 kubelet[2501]: I0212 19:21:45.588941 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/85690038-7e61-4370-9d49-b8bf791be53d-clustermesh-secrets\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " Feb 12 19:21:45.589312 kubelet[2501]: I0212 19:21:45.588961 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-lib-modules\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " Feb 12 19:21:45.589312 kubelet[2501]: I0212 19:21:45.588978 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-cilium-cgroup\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " Feb 12 19:21:45.589312 kubelet[2501]: I0212 19:21:45.588983 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-cni-path" (OuterVolumeSpecName: "cni-path") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:45.589312 kubelet[2501]: I0212 19:21:45.589010 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps2rx\" (UniqueName: \"kubernetes.io/projected/85690038-7e61-4370-9d49-b8bf791be53d-kube-api-access-ps2rx\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " Feb 12 19:21:45.589606 kubelet[2501]: I0212 19:21:45.589017 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:45.589606 kubelet[2501]: I0212 19:21:45.589028 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-etc-cni-netd\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " Feb 12 19:21:45.589606 kubelet[2501]: I0212 19:21:45.589047 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-host-proc-sys-kernel\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " Feb 12 19:21:45.589606 kubelet[2501]: I0212 19:21:45.589079 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-cilium-run\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " Feb 12 19:21:45.589606 kubelet[2501]: I0212 19:21:45.589098 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/85690038-7e61-4370-9d49-b8bf791be53d-hubble-tls\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " Feb 12 19:21:45.589606 kubelet[2501]: I0212 19:21:45.589115 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-hostproc\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " Feb 12 19:21:45.589740 kubelet[2501]: I0212 19:21:45.589132 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-bpf-maps\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " Feb 12 19:21:45.589740 kubelet[2501]: I0212 19:21:45.589163 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-xtables-lock\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " Feb 12 19:21:45.589740 kubelet[2501]: I0212 19:21:45.589182 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-host-proc-sys-net\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " Feb 12 19:21:45.589740 kubelet[2501]: I0212 19:21:45.589204 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85690038-7e61-4370-9d49-b8bf791be53d-cilium-config-path\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " Feb 12 19:21:45.589740 kubelet[2501]: I0212 19:21:45.589234 2501 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/85690038-7e61-4370-9d49-b8bf791be53d-cilium-ipsec-secrets\") pod \"85690038-7e61-4370-9d49-b8bf791be53d\" (UID: \"85690038-7e61-4370-9d49-b8bf791be53d\") " 
Feb 12 19:21:45.589740 kubelet[2501]: I0212 19:21:45.589267 2501 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-cni-path\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.589887 kubelet[2501]: I0212 19:21:45.589279 2501 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-lib-modules\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.590467 kubelet[2501]: I0212 19:21:45.590438 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:45.590803 kubelet[2501]: I0212 19:21:45.590786 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:45.590914 kubelet[2501]: I0212 19:21:45.590901 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:45.591007 kubelet[2501]: I0212 19:21:45.590996 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:45.591096 kubelet[2501]: I0212 19:21:45.591085 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:45.591181 kubelet[2501]: I0212 19:21:45.591169 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-hostproc" (OuterVolumeSpecName: "hostproc") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:45.591327 kubelet[2501]: I0212 19:21:45.591314 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:45.591439 kubelet[2501]: I0212 19:21:45.591409 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:21:45.591682 kubelet[2501]: W0212 19:21:45.591649 2501 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/85690038-7e61-4370-9d49-b8bf791be53d/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:21:45.594035 systemd[1]: var-lib-kubelet-pods-85690038\x2d7e61\x2d4370\x2d9d49\x2db8bf791be53d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:21:45.596512 systemd[1]: var-lib-kubelet-pods-85690038\x2d7e61\x2d4370\x2d9d49\x2db8bf791be53d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:21:45.598708 kubelet[2501]: I0212 19:21:45.598675 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85690038-7e61-4370-9d49-b8bf791be53d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:21:45.599093 kubelet[2501]: I0212 19:21:45.599070 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85690038-7e61-4370-9d49-b8bf791be53d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:21:45.599411 kubelet[2501]: I0212 19:21:45.599383 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85690038-7e61-4370-9d49-b8bf791be53d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:21:45.603931 kubelet[2501]: I0212 19:21:45.601280 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85690038-7e61-4370-9d49-b8bf791be53d-kube-api-access-ps2rx" (OuterVolumeSpecName: "kube-api-access-ps2rx") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "kube-api-access-ps2rx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:21:45.602024 systemd[1]: var-lib-kubelet-pods-85690038\x2d7e61\x2d4370\x2d9d49\x2db8bf791be53d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dps2rx.mount: Deactivated successfully. Feb 12 19:21:45.602119 systemd[1]: var-lib-kubelet-pods-85690038\x2d7e61\x2d4370\x2d9d49\x2db8bf791be53d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 12 19:21:45.604703 kubelet[2501]: I0212 19:21:45.604669 2501 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85690038-7e61-4370-9d49-b8bf791be53d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "85690038-7e61-4370-9d49-b8bf791be53d" (UID: "85690038-7e61-4370-9d49-b8bf791be53d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:21:45.667714 sshd[4482]: Accepted publickey for core from 10.200.12.6 port 52218 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:21:45.669142 sshd[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:21:45.673078 systemd-logind[1367]: New session 27 of user core. Feb 12 19:21:45.673572 systemd[1]: Started session-27.scope. Feb 12 19:21:45.689819 kubelet[2501]: I0212 19:21:45.689792 2501 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/85690038-7e61-4370-9d49-b8bf791be53d-hubble-tls\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.690008 kubelet[2501]: I0212 19:21:45.689994 2501 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.690071 kubelet[2501]: I0212 19:21:45.690063 2501 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-cilium-run\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.690130 kubelet[2501]: I0212 19:21:45.690122 2501 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-hostproc\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.690197 kubelet[2501]: I0212 19:21:45.690179 2501 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-bpf-maps\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.690269 kubelet[2501]: I0212 19:21:45.690261 2501 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-xtables-lock\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.690326 kubelet[2501]: I0212 19:21:45.690319 2501 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/85690038-7e61-4370-9d49-b8bf791be53d-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.690386 kubelet[2501]: I0212 19:21:45.690375 2501 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-host-proc-sys-net\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.690444 kubelet[2501]: I0212 19:21:45.690435 2501 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85690038-7e61-4370-9d49-b8bf791be53d-cilium-config-path\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.690529 kubelet[2501]: I0212 19:21:45.690491 2501 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/85690038-7e61-4370-9d49-b8bf791be53d-clustermesh-secrets\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.690599 kubelet[2501]: I0212 19:21:45.690590 2501 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-cilium-cgroup\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.690676 kubelet[2501]: I0212 19:21:45.690668 2501 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-ps2rx\" (UniqueName: \"kubernetes.io/projected/85690038-7e61-4370-9d49-b8bf791be53d-kube-api-access-ps2rx\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.690735 kubelet[2501]: I0212 19:21:45.690726 2501 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85690038-7e61-4370-9d49-b8bf791be53d-etc-cni-netd\") on node \"ci-3510.3.2-a-e08ac1c56f\" DevicePath \"\"" Feb 12 19:21:45.869835 systemd[1]: Removed slice kubepods-burstable-pod85690038_7e61_4370_9d49_b8bf791be53d.slice. Feb 12 19:21:46.329895 kubelet[2501]: I0212 19:21:46.329862 2501 scope.go:115] "RemoveContainer" containerID="d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36" Feb 12 19:21:46.332023 env[1382]: time="2024-02-12T19:21:46.331940687Z" level=info msg="RemoveContainer for \"d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36\"" Feb 12 19:21:46.342527 env[1382]: time="2024-02-12T19:21:46.342449061Z" level=info msg="RemoveContainer for \"d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36\" returns successfully" Feb 12 19:21:46.360366 kubelet[2501]: I0212 19:21:46.360326 2501 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:21:46.360594 kubelet[2501]: E0212 19:21:46.360582 2501 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="85690038-7e61-4370-9d49-b8bf791be53d" containerName="mount-cgroup" Feb 12 19:21:46.360713 kubelet[2501]: I0212 19:21:46.360703 2501 memory_manager.go:346] "RemoveStaleState removing state" podUID="85690038-7e61-4370-9d49-b8bf791be53d" containerName="mount-cgroup" Feb 12 19:21:46.365992 systemd[1]: Created slice kubepods-burstable-podc9337b81_4a3e_43bc_a67c_e20dad8ed175.slice. 
Feb 12 19:21:46.394038 kubelet[2501]: I0212 19:21:46.393999 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c9337b81-4a3e-43bc-a67c-e20dad8ed175-cilium-config-path\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.394314 kubelet[2501]: I0212 19:21:46.394292 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9337b81-4a3e-43bc-a67c-e20dad8ed175-lib-modules\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.394433 kubelet[2501]: I0212 19:21:46.394420 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c9337b81-4a3e-43bc-a67c-e20dad8ed175-cilium-ipsec-secrets\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.394552 kubelet[2501]: I0212 19:21:46.394538 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c9337b81-4a3e-43bc-a67c-e20dad8ed175-bpf-maps\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.394662 kubelet[2501]: I0212 19:21:46.394651 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c9337b81-4a3e-43bc-a67c-e20dad8ed175-cni-path\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.394754 kubelet[2501]: I0212 19:21:46.394744 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7c4p\" (UniqueName: \"kubernetes.io/projected/c9337b81-4a3e-43bc-a67c-e20dad8ed175-kube-api-access-x7c4p\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.394856 kubelet[2501]: I0212 19:21:46.394846 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c9337b81-4a3e-43bc-a67c-e20dad8ed175-clustermesh-secrets\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.394950 kubelet[2501]: I0212 19:21:46.394940 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9337b81-4a3e-43bc-a67c-e20dad8ed175-xtables-lock\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.395047 kubelet[2501]: I0212 19:21:46.395037 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c9337b81-4a3e-43bc-a67c-e20dad8ed175-cilium-run\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.395145 kubelet[2501]: I0212 19:21:46.395135 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/c9337b81-4a3e-43bc-a67c-e20dad8ed175-hostproc\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.395244 kubelet[2501]: I0212 19:21:46.395234 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9337b81-4a3e-43bc-a67c-e20dad8ed175-etc-cni-netd\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.395347 kubelet[2501]: I0212 19:21:46.395337 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c9337b81-4a3e-43bc-a67c-e20dad8ed175-host-proc-sys-kernel\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.395451 kubelet[2501]: I0212 19:21:46.395441 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c9337b81-4a3e-43bc-a67c-e20dad8ed175-cilium-cgroup\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.395550 kubelet[2501]: I0212 19:21:46.395540 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c9337b81-4a3e-43bc-a67c-e20dad8ed175-host-proc-sys-net\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.395645 kubelet[2501]: I0212 19:21:46.395636 2501 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c9337b81-4a3e-43bc-a67c-e20dad8ed175-hubble-tls\") pod \"cilium-gp7x9\" (UID: \"c9337b81-4a3e-43bc-a67c-e20dad8ed175\") " pod="kube-system/cilium-gp7x9" Feb 12 19:21:46.669244 env[1382]: time="2024-02-12T19:21:46.669115234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gp7x9,Uid:c9337b81-4a3e-43bc-a67c-e20dad8ed175,Namespace:kube-system,Attempt:0,}" Feb 12 19:21:46.730200 env[1382]: time="2024-02-12T19:21:46.730126406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:21:46.730347 env[1382]: time="2024-02-12T19:21:46.730204807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:21:46.730347 env[1382]: time="2024-02-12T19:21:46.730231967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:21:46.730471 env[1382]: time="2024-02-12T19:21:46.730432731Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8890657ae4839427cba997a932fc68f882e471208c90b809c34f141f501f03c1 pid=4539 runtime=io.containerd.runc.v2 Feb 12 19:21:46.739909 systemd[1]: Started cri-containerd-8890657ae4839427cba997a932fc68f882e471208c90b809c34f141f501f03c1.scope. 
Feb 12 19:21:46.763321 env[1382]: time="2024-02-12T19:21:46.763272235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gp7x9,Uid:c9337b81-4a3e-43bc-a67c-e20dad8ed175,Namespace:kube-system,Attempt:0,} returns sandbox id \"8890657ae4839427cba997a932fc68f882e471208c90b809c34f141f501f03c1\"" Feb 12 19:21:46.767566 env[1382]: time="2024-02-12T19:21:46.767375863Z" level=info msg="CreateContainer within sandbox \"8890657ae4839427cba997a932fc68f882e471208c90b809c34f141f501f03c1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:21:46.813258 env[1382]: time="2024-02-12T19:21:46.813210062Z" level=info msg="CreateContainer within sandbox \"8890657ae4839427cba997a932fc68f882e471208c90b809c34f141f501f03c1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"db783d711059787e2460882e3f7ac7f58b70fb93d4ed715a3e3898a5feb76c11\"" Feb 12 19:21:46.815470 env[1382]: time="2024-02-12T19:21:46.814047156Z" level=info msg="StartContainer for \"db783d711059787e2460882e3f7ac7f58b70fb93d4ed715a3e3898a5feb76c11\"" Feb 12 19:21:46.828156 systemd[1]: Started cri-containerd-db783d711059787e2460882e3f7ac7f58b70fb93d4ed715a3e3898a5feb76c11.scope. Feb 12 19:21:46.854380 env[1382]: time="2024-02-12T19:21:46.854334264Z" level=info msg="StartContainer for \"db783d711059787e2460882e3f7ac7f58b70fb93d4ed715a3e3898a5feb76c11\" returns successfully" Feb 12 19:21:46.859088 systemd[1]: cri-containerd-db783d711059787e2460882e3f7ac7f58b70fb93d4ed715a3e3898a5feb76c11.scope: Deactivated successfully. Feb 12 19:21:46.908137 env[1382]: time="2024-02-12T19:21:46.908092075Z" level=info msg="shim disconnected" id=db783d711059787e2460882e3f7ac7f58b70fb93d4ed715a3e3898a5feb76c11 Feb 12 19:21:46.908421 env[1382]: time="2024-02-12T19:21:46.908403360Z" level=warning msg="cleaning up after shim disconnected" id=db783d711059787e2460882e3f7ac7f58b70fb93d4ed715a3e3898a5feb76c11 namespace=k8s.io Feb 12 19:21:46.908529 env[1382]: time="2024-02-12T19:21:46.908486001Z" level=info msg="cleaning up dead shim" Feb 12 19:21:46.916347 env[1382]: time="2024-02-12T19:21:46.916308171Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4622 runtime=io.containerd.runc.v2\n" Feb 12 19:21:47.335066 env[1382]: time="2024-02-12T19:21:47.335025163Z" level=info msg="CreateContainer within sandbox \"8890657ae4839427cba997a932fc68f882e471208c90b809c34f141f501f03c1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:21:47.376621 env[1382]: time="2024-02-12T19:21:47.376566368Z" level=info msg="CreateContainer within sandbox \"8890657ae4839427cba997a932fc68f882e471208c90b809c34f141f501f03c1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"39e169f78090b2787db8c8ad70b01008ece33b07f09f761bd75a9492deeb81ae\"" Feb 12 19:21:47.377372 env[1382]: time="2024-02-12T19:21:47.377346141Z" level=info msg="StartContainer for \"39e169f78090b2787db8c8ad70b01008ece33b07f09f761bd75a9492deeb81ae\"" Feb 12 19:21:47.391190 systemd[1]: Started cri-containerd-39e169f78090b2787db8c8ad70b01008ece33b07f09f761bd75a9492deeb81ae.scope. Feb 12 19:21:47.417752 env[1382]: time="2024-02-12T19:21:47.417692966Z" level=info msg="StartContainer for \"39e169f78090b2787db8c8ad70b01008ece33b07f09f761bd75a9492deeb81ae\" returns successfully" Feb 12 19:21:47.424100 systemd[1]: cri-containerd-39e169f78090b2787db8c8ad70b01008ece33b07f09f761bd75a9492deeb81ae.scope: Deactivated successfully. 
Feb 12 19:21:47.466305 env[1382]: time="2024-02-12T19:21:47.466259647Z" level=info msg="shim disconnected" id=39e169f78090b2787db8c8ad70b01008ece33b07f09f761bd75a9492deeb81ae Feb 12 19:21:47.466556 env[1382]: time="2024-02-12T19:21:47.466536531Z" level=warning msg="cleaning up after shim disconnected" id=39e169f78090b2787db8c8ad70b01008ece33b07f09f761bd75a9492deeb81ae namespace=k8s.io Feb 12 19:21:47.466624 env[1382]: time="2024-02-12T19:21:47.466612133Z" level=info msg="cleaning up dead shim" Feb 12 19:21:47.473394 env[1382]: time="2024-02-12T19:21:47.473353084Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4683 runtime=io.containerd.runc.v2\n" Feb 12 19:21:47.868242 kubelet[2501]: I0212 19:21:47.868200 2501 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=85690038-7e61-4370-9d49-b8bf791be53d path="/var/lib/kubelet/pods/85690038-7e61-4370-9d49-b8bf791be53d/volumes" Feb 12 19:21:48.033526 kubelet[2501]: W0212 19:21:48.033254 2501 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85690038_7e61_4370_9d49_b8bf791be53d.slice/cri-containerd-d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36.scope WatchSource:0}: container "d017b4400d9ae4f9e90590944e6dca934d0f061d0370b84f3139699b0509cf36" in namespace "k8s.io": not found Feb 12 19:21:48.343788 env[1382]: time="2024-02-12T19:21:48.343726009Z" level=info msg="CreateContainer within sandbox \"8890657ae4839427cba997a932fc68f882e471208c90b809c34f141f501f03c1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:21:48.398879 env[1382]: time="2024-02-12T19:21:48.398831513Z" level=info msg="CreateContainer within sandbox \"8890657ae4839427cba997a932fc68f882e471208c90b809c34f141f501f03c1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"79e698992f8417fdd2bbef452af55ee8dca6b5c1ebd0fefb2ff72157bfdf62f0\"" Feb 12 19:21:48.399860 env[1382]: time="2024-02-12T19:21:48.399835250Z" level=info msg="StartContainer for \"79e698992f8417fdd2bbef452af55ee8dca6b5c1ebd0fefb2ff72157bfdf62f0\"" Feb 12 19:21:48.419554 systemd[1]: Started cri-containerd-79e698992f8417fdd2bbef452af55ee8dca6b5c1ebd0fefb2ff72157bfdf62f0.scope. Feb 12 19:21:48.447506 systemd[1]: cri-containerd-79e698992f8417fdd2bbef452af55ee8dca6b5c1ebd0fefb2ff72157bfdf62f0.scope: Deactivated successfully. Feb 12 19:21:48.456394 env[1382]: time="2024-02-12T19:21:48.456349497Z" level=info msg="StartContainer for \"79e698992f8417fdd2bbef452af55ee8dca6b5c1ebd0fefb2ff72157bfdf62f0\" returns successfully" Feb 12 19:21:48.502402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79e698992f8417fdd2bbef452af55ee8dca6b5c1ebd0fefb2ff72157bfdf62f0-rootfs.mount: Deactivated successfully. 
Feb 12 19:21:48.504035 env[1382]: time="2024-02-12T19:21:48.503991719Z" level=info msg="shim disconnected" id=79e698992f8417fdd2bbef452af55ee8dca6b5c1ebd0fefb2ff72157bfdf62f0 Feb 12 19:21:48.504216 env[1382]: time="2024-02-12T19:21:48.504197882Z" level=warning msg="cleaning up after shim disconnected" id=79e698992f8417fdd2bbef452af55ee8dca6b5c1ebd0fefb2ff72157bfdf62f0 namespace=k8s.io Feb 12 19:21:48.504299 env[1382]: time="2024-02-12T19:21:48.504286284Z" level=info msg="cleaning up dead shim" Feb 12 19:21:48.511434 env[1382]: time="2024-02-12T19:21:48.511391520Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4740 runtime=io.containerd.runc.v2\n" Feb 12 19:21:49.347097 env[1382]: time="2024-02-12T19:21:49.347025845Z" level=info msg="CreateContainer within sandbox \"8890657ae4839427cba997a932fc68f882e471208c90b809c34f141f501f03c1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:21:49.384619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount61019465.mount: Deactivated successfully. Feb 12 19:21:49.390242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123957344.mount: Deactivated successfully. Feb 12 19:21:49.409644 env[1382]: time="2024-02-12T19:21:49.409579946Z" level=info msg="CreateContainer within sandbox \"8890657ae4839427cba997a932fc68f882e471208c90b809c34f141f501f03c1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f12f1f9fe4d9adcf7d2f780449d4d69ac2d1785d7b9196c0f97c3163f320f37b\"" Feb 12 19:21:49.410300 env[1382]: time="2024-02-12T19:21:49.410276478Z" level=info msg="StartContainer for \"f12f1f9fe4d9adcf7d2f780449d4d69ac2d1785d7b9196c0f97c3163f320f37b\"" Feb 12 19:21:49.427238 systemd[1]: Started cri-containerd-f12f1f9fe4d9adcf7d2f780449d4d69ac2d1785d7b9196c0f97c3163f320f37b.scope. Feb 12 19:21:49.451778 systemd[1]: cri-containerd-f12f1f9fe4d9adcf7d2f780449d4d69ac2d1785d7b9196c0f97c3163f320f37b.scope: Deactivated successfully. 
Feb 12 19:21:49.454686 env[1382]: time="2024-02-12T19:21:49.454640522Z" level=info msg="StartContainer for \"f12f1f9fe4d9adcf7d2f780449d4d69ac2d1785d7b9196c0f97c3163f320f37b\" returns successfully" Feb 12 19:21:49.484633 env[1382]: time="2024-02-12T19:21:49.484583691Z" level=info msg="shim disconnected" id=f12f1f9fe4d9adcf7d2f780449d4d69ac2d1785d7b9196c0f97c3163f320f37b Feb 12 19:21:49.484633 env[1382]: time="2024-02-12T19:21:49.484631172Z" level=warning msg="cleaning up after shim disconnected" id=f12f1f9fe4d9adcf7d2f780449d4d69ac2d1785d7b9196c0f97c3163f320f37b namespace=k8s.io Feb 12 19:21:49.484633 env[1382]: time="2024-02-12T19:21:49.484640332Z" level=info msg="cleaning up dead shim" Feb 12 19:21:49.491061 env[1382]: time="2024-02-12T19:21:49.491011676Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4795 runtime=io.containerd.runc.v2\n" Feb 12 19:21:49.990430 kubelet[2501]: E0212 19:21:49.990403 2501 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:21:50.352104 env[1382]: time="2024-02-12T19:21:50.351395339Z" level=info msg="CreateContainer within sandbox \"8890657ae4839427cba997a932fc68f882e471208c90b809c34f141f501f03c1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:21:50.400718 env[1382]: time="2024-02-12T19:21:50.400666659Z" level=info msg="CreateContainer within sandbox \"8890657ae4839427cba997a932fc68f882e471208c90b809c34f141f501f03c1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6ec92de439a6ef4d8b49f43a3be03438821841dfd160583a978de49ca8720ac7\"" Feb 12 19:21:50.401631 env[1382]: time="2024-02-12T19:21:50.401597635Z" level=info msg="StartContainer for \"6ec92de439a6ef4d8b49f43a3be03438821841dfd160583a978de49ca8720ac7\"" Feb 12 19:21:50.421342 systemd[1]: Started cri-containerd-6ec92de439a6ef4d8b49f43a3be03438821841dfd160583a978de49ca8720ac7.scope. Feb 12 19:21:50.457295 env[1382]: time="2024-02-12T19:21:50.457236099Z" level=info msg="StartContainer for \"6ec92de439a6ef4d8b49f43a3be03438821841dfd160583a978de49ca8720ac7\" returns successfully" Feb 12 19:21:50.866902 kubelet[2501]: E0212 19:21:50.865161 2501 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-r5sdj" podUID=f16aae47-36df-4b20-bf9c-41b8494bf3a0 Feb 12 19:21:50.914779 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 12 19:21:51.147152 kubelet[2501]: W0212 19:21:51.147039 2501 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9337b81_4a3e_43bc_a67c_e20dad8ed175.slice/cri-containerd-db783d711059787e2460882e3f7ac7f58b70fb93d4ed715a3e3898a5feb76c11.scope WatchSource:0}: task db783d711059787e2460882e3f7ac7f58b70fb93d4ed715a3e3898a5feb76c11 not found: not found Feb 12 19:21:52.106468 systemd[1]: run-containerd-runc-k8s.io-6ec92de439a6ef4d8b49f43a3be03438821841dfd160583a978de49ca8720ac7-runc.Fz7x1m.mount: Deactivated successfully. 
Feb 12 19:21:52.864546 kubelet[2501]: E0212 19:21:52.864515 2501 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-r5sdj" podUID=f16aae47-36df-4b20-bf9c-41b8494bf3a0 Feb 12 19:21:53.434817 systemd-networkd[1527]: lxc_health: Link UP Feb 12 19:21:53.452246 systemd-networkd[1527]: lxc_health: Gained carrier Feb 12 19:21:53.452742 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:21:54.236072 systemd[1]: run-containerd-runc-k8s.io-6ec92de439a6ef4d8b49f43a3be03438821841dfd160583a978de49ca8720ac7-runc.QImNcR.mount: Deactivated successfully. Feb 12 19:21:54.253568 kubelet[2501]: W0212 19:21:54.253408 2501 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9337b81_4a3e_43bc_a67c_e20dad8ed175.slice/cri-containerd-39e169f78090b2787db8c8ad70b01008ece33b07f09f761bd75a9492deeb81ae.scope WatchSource:0}: task 39e169f78090b2787db8c8ad70b01008ece33b07f09f761bd75a9492deeb81ae not found: not found Feb 12 19:21:54.685707 kubelet[2501]: I0212 19:21:54.685596 2501 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gp7x9" podStartSLOduration=8.685562032 pod.CreationTimestamp="2024-02-12 19:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:21:51.367592545 +0000 UTC m=+261.673697034" watchObservedRunningTime="2024-02-12 19:21:54.685562032 +0000 UTC m=+264.991666521" Feb 12 19:21:54.864176 kubelet[2501]: E0212 19:21:54.864137 2501 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-r5sdj" podUID=f16aae47-36df-4b20-bf9c-41b8494bf3a0 Feb 12 19:21:54.900641 systemd-networkd[1527]: lxc_health: Gained IPv6LL Feb 12 19:21:56.411031 systemd[1]: run-containerd-runc-k8s.io-6ec92de439a6ef4d8b49f43a3be03438821841dfd160583a978de49ca8720ac7-runc.67uhoJ.mount: Deactivated successfully. Feb 12 19:21:57.362161 kubelet[2501]: W0212 19:21:57.362094 2501 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9337b81_4a3e_43bc_a67c_e20dad8ed175.slice/cri-containerd-79e698992f8417fdd2bbef452af55ee8dca6b5c1ebd0fefb2ff72157bfdf62f0.scope WatchSource:0}: task 79e698992f8417fdd2bbef452af55ee8dca6b5c1ebd0fefb2ff72157bfdf62f0 not found: not found Feb 12 19:21:58.548365 systemd[1]: run-containerd-runc-k8s.io-6ec92de439a6ef4d8b49f43a3be03438821841dfd160583a978de49ca8720ac7-runc.zxEd66.mount: Deactivated successfully. 
Feb 12 19:22:00.473255 kubelet[2501]: W0212 19:22:00.473219 2501 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9337b81_4a3e_43bc_a67c_e20dad8ed175.slice/cri-containerd-f12f1f9fe4d9adcf7d2f780449d4d69ac2d1785d7b9196c0f97c3163f320f37b.scope WatchSource:0}: task f12f1f9fe4d9adcf7d2f780449d4d69ac2d1785d7b9196c0f97c3163f320f37b not found: not found Feb 12 19:22:00.692761 systemd[1]: run-containerd-runc-k8s.io-6ec92de439a6ef4d8b49f43a3be03438821841dfd160583a978de49ca8720ac7-runc.8TIxjL.mount: Deactivated successfully. Feb 12 19:22:02.828762 systemd[1]: run-containerd-runc-k8s.io-6ec92de439a6ef4d8b49f43a3be03438821841dfd160583a978de49ca8720ac7-runc.USoBuJ.mount: Deactivated successfully. Feb 12 19:22:02.962014 sshd[4482]: pam_unix(sshd:session): session closed for user core Feb 12 19:22:02.964951 systemd[1]: sshd@24-10.200.20.31:22-10.200.12.6:52218.service: Deactivated successfully. Feb 12 19:22:02.965722 systemd[1]: session-27.scope: Deactivated successfully. Feb 12 19:22:02.966341 systemd-logind[1367]: Session 27 logged out. Waiting for processes to exit. Feb 12 19:22:02.967108 systemd-logind[1367]: Removed session 27. Feb 12 19:22:19.244709 systemd[1]: cri-containerd-8f04cac450aa8e5d51b207a24f171382f89e28ea5c03a25f20437a757cf46a2f.scope: Deactivated successfully. Feb 12 19:22:19.245009 systemd[1]: cri-containerd-8f04cac450aa8e5d51b207a24f171382f89e28ea5c03a25f20437a757cf46a2f.scope: Consumed 3.730s CPU time. Feb 12 19:22:19.265043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f04cac450aa8e5d51b207a24f171382f89e28ea5c03a25f20437a757cf46a2f-rootfs.mount: Deactivated successfully. Feb 12 19:22:19.305604 env[1382]: time="2024-02-12T19:22:19.305558469Z" level=info msg="shim disconnected" id=8f04cac450aa8e5d51b207a24f171382f89e28ea5c03a25f20437a757cf46a2f Feb 12 19:22:19.306106 env[1382]: time="2024-02-12T19:22:19.306076756Z" level=warning msg="cleaning up after shim disconnected" id=8f04cac450aa8e5d51b207a24f171382f89e28ea5c03a25f20437a757cf46a2f namespace=k8s.io Feb 12 19:22:19.306202 env[1382]: time="2024-02-12T19:22:19.306187918Z" level=info msg="cleaning up dead shim" Feb 12 19:22:19.313058 env[1382]: time="2024-02-12T19:22:19.313018177Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:22:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5504 runtime=io.containerd.runc.v2\n" Feb 12 19:22:19.404341 kubelet[2501]: I0212 19:22:19.404313 2501 scope.go:115] "RemoveContainer" containerID="8f04cac450aa8e5d51b207a24f171382f89e28ea5c03a25f20437a757cf46a2f" Feb 12 19:22:19.407921 env[1382]: time="2024-02-12T19:22:19.407885187Z" level=info msg="CreateContainer within sandbox \"6e9ff90c30b76babbfe2bdbe65804b2ce2786d26c87784055e7a3009efc02c42\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 12 19:22:19.441235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1894661385.mount: Deactivated successfully. 
Feb 12 19:22:19.458326 env[1382]: time="2024-02-12T19:22:19.458277155Z" level=info msg="CreateContainer within sandbox \"6e9ff90c30b76babbfe2bdbe65804b2ce2786d26c87784055e7a3009efc02c42\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5c6903480a60fafa8d42e371b9dd453d9713ef88e76afdd97ae4be36f2f7a47e\"" Feb 12 19:22:19.459015 env[1382]: time="2024-02-12T19:22:19.458993565Z" level=info msg="StartContainer for \"5c6903480a60fafa8d42e371b9dd453d9713ef88e76afdd97ae4be36f2f7a47e\"" Feb 12 19:22:19.472224 systemd[1]: Started cri-containerd-5c6903480a60fafa8d42e371b9dd453d9713ef88e76afdd97ae4be36f2f7a47e.scope. Feb 12 19:22:19.510074 env[1382]: time="2024-02-12T19:22:19.509934021Z" level=info msg="StartContainer for \"5c6903480a60fafa8d42e371b9dd453d9713ef88e76afdd97ae4be36f2f7a47e\" returns successfully" Feb 12 19:22:22.328903 systemd[1]: cri-containerd-a2dcc0e4ed2a6b0662dc6c8042eb48ed4b604abcc9cb3df16473aab3fba240c2.scope: Deactivated successfully. Feb 12 19:22:22.329224 systemd[1]: cri-containerd-a2dcc0e4ed2a6b0662dc6c8042eb48ed4b604abcc9cb3df16473aab3fba240c2.scope: Consumed 3.134s CPU time. Feb 12 19:22:22.346899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2dcc0e4ed2a6b0662dc6c8042eb48ed4b604abcc9cb3df16473aab3fba240c2-rootfs.mount: Deactivated successfully. Feb 12 19:22:22.391317 env[1382]: time="2024-02-12T19:22:22.391263353Z" level=info msg="shim disconnected" id=a2dcc0e4ed2a6b0662dc6c8042eb48ed4b604abcc9cb3df16473aab3fba240c2 Feb 12 19:22:22.391772 env[1382]: time="2024-02-12T19:22:22.391752200Z" level=warning msg="cleaning up after shim disconnected" id=a2dcc0e4ed2a6b0662dc6c8042eb48ed4b604abcc9cb3df16473aab3fba240c2 namespace=k8s.io Feb 12 19:22:22.391858 env[1382]: time="2024-02-12T19:22:22.391845721Z" level=info msg="cleaning up dead shim" Feb 12 19:22:22.399572 env[1382]: time="2024-02-12T19:22:22.399527431Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:22:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5561 runtime=io.containerd.runc.v2\n" Feb 12 19:22:22.411833 kubelet[2501]: I0212 19:22:22.411799 2501 scope.go:115] "RemoveContainer" containerID="a2dcc0e4ed2a6b0662dc6c8042eb48ed4b604abcc9cb3df16473aab3fba240c2" Feb 12 19:22:22.415736 env[1382]: time="2024-02-12T19:22:22.415675382Z" level=info msg="CreateContainer within sandbox \"0ab2ec75e74be1443852d7c3e36ff3359cf64501373c969bf2a53d12cefc5958\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 12 19:22:22.450216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount343523517.mount: Deactivated successfully. Feb 12 19:22:22.455568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3583729112.mount: Deactivated successfully. Feb 12 19:22:22.471649 env[1382]: time="2024-02-12T19:22:22.471595501Z" level=info msg="CreateContainer within sandbox \"0ab2ec75e74be1443852d7c3e36ff3359cf64501373c969bf2a53d12cefc5958\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2494e4b2b4cf8b91b301ed39d61b0e97bf94adb55145456acd9d41205ee34148\"" Feb 12 19:22:22.472181 env[1382]: time="2024-02-12T19:22:22.472151629Z" level=info msg="StartContainer for \"2494e4b2b4cf8b91b301ed39d61b0e97bf94adb55145456acd9d41205ee34148\"" Feb 12 19:22:22.487738 systemd[1]: Started cri-containerd-2494e4b2b4cf8b91b301ed39d61b0e97bf94adb55145456acd9d41205ee34148.scope. 
Feb 12 19:22:23.510014 kubelet[2501]: E0212 19:22:23.509897 2501 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-e08ac1c56f.17b333e2d70a1fc5", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-e08ac1c56f", UID:"d3f122774b25aa8272b57c0e8f8d0800", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e08ac1c56f"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 22, 13, 66940357, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 22, 13, 66940357, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.31:46150->10.200.20.14:2379: read: connection timed out' (will not retry!)
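The rejected Unhealthy event above, and the failed lease updates that follow below, point at the same symptom: established TCP connections from this node (10.200.20.31) to etcd at 10.200.20.14:2379 whose reads time out. A first triage step is to see whether the endpoint still completes a TCP handshake promptly; the sketch below does only that (the address is copied from the errors above, and a successful dial proves nothing about etcd health, for which `etcdctl endpoint health` is the proper tool):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// etcd endpoint as reported in the kubelet errors above.
	const endpoint = "10.200.20.14:2379"

	start := time.Now()
	conn, err := net.DialTimeout("tcp", endpoint, 3*time.Second)
	if err != nil {
		fmt.Printf("dial %s failed after %v: %v\n", endpoint, time.Since(start), err)
		return
	}
	defer conn.Close()
	fmt.Printf("dial %s ok in %v (handshake only; reads may still stall)\n", endpoint, time.Since(start))
}
```

A dial that succeeds while reads keep timing out, as in this log, suggests etcd itself is stalled rather than the network path being down.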
Feb 12 19:22:24.597962 kubelet[2501]: E0212 19:22:24.597814 2501 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.31:46382->10.200.20.14:2379: read: connection timed out
Feb 12 19:22:29.887678 env[1382]: time="2024-02-12T19:22:29.887447628Z" level=info msg="StopPodSandbox for \"0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c\""
Feb 12 19:22:29.887678 env[1382]: time="2024-02-12T19:22:29.887566110Z" level=info msg="TearDown network for sandbox \"0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c\" successfully"
Feb 12 19:22:29.887678 env[1382]: time="2024-02-12T19:22:29.887609911Z" level=info msg="StopPodSandbox for \"0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c\" returns successfully"
Feb 12 19:22:29.888085 env[1382]: time="2024-02-12T19:22:29.887924035Z" level=info msg="RemovePodSandbox for \"0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c\""
Feb 12 19:22:29.888085 env[1382]: time="2024-02-12T19:22:29.887951035Z" level=info msg="Forcibly stopping sandbox \"0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c\""
Feb 12 19:22:29.888085 env[1382]: time="2024-02-12T19:22:29.888012356Z" level=info msg="TearDown network for sandbox \"0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c\" successfully"
Feb 12 19:22:29.896394 env[1382]: time="2024-02-12T19:22:29.896346113Z" level=info msg="RemovePodSandbox \"0e18c8286a555e33d3bc14da3455609ded3bea1f2a25ec3a59cf48dd339a128c\" returns successfully"
Feb 12 19:22:29.897006 env[1382]: time="2024-02-12T19:22:29.896818799Z" level=info msg="StopPodSandbox for \"9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b\""
Feb 12 19:22:29.897006 env[1382]: time="2024-02-12T19:22:29.896907881Z" level=info msg="TearDown network for sandbox \"9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b\" successfully"
Feb 12 19:22:29.897006 env[1382]: time="2024-02-12T19:22:29.896938361Z" level=info msg="StopPodSandbox for \"9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b\" returns successfully"
Feb 12 19:22:29.898343 env[1382]: time="2024-02-12T19:22:29.897247205Z" level=info msg="RemovePodSandbox for \"9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b\""
Feb 12 19:22:29.898343 env[1382]: time="2024-02-12T19:22:29.897274286Z" level=info msg="Forcibly stopping sandbox \"9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b\""
Feb 12 19:22:29.898343 env[1382]: time="2024-02-12T19:22:29.897331567Z" level=info msg="TearDown network for sandbox \"9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b\" successfully"
Feb 12 19:22:29.905582 env[1382]: time="2024-02-12T19:22:29.905546161Z" level=info msg="RemovePodSandbox \"9dd60c2db8aa91a832cff830b36202bd6ea9c3fef0900ca8494eb6f30658ee1b\" returns successfully"
Feb 12 19:22:29.906049 env[1382]: time="2024-02-12T19:22:29.906028608Z" level=info msg="StopPodSandbox for \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\""
Feb 12 19:22:29.906221 env[1382]: time="2024-02-12T19:22:29.906180930Z" level=info msg="TearDown network for sandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" successfully"
Feb 12 19:22:29.906314 env[1382]: time="2024-02-12T19:22:29.906297812Z" level=info msg="StopPodSandbox for \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" returns successfully"
Feb 12 19:22:29.906610 env[1382]: time="2024-02-12T19:22:29.906584256Z" level=info msg="RemovePodSandbox for \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\""
time="2024-02-12T19:22:29.906584256Z" level=info msg="RemovePodSandbox for \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\"" Feb 12 19:22:29.906676 env[1382]: time="2024-02-12T19:22:29.906614816Z" level=info msg="Forcibly stopping sandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\"" Feb 12 19:22:29.906707 env[1382]: time="2024-02-12T19:22:29.906675777Z" level=info msg="TearDown network for sandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" successfully" Feb 12 19:22:29.917308 env[1382]: time="2024-02-12T19:22:29.917264565Z" level=info msg="RemovePodSandbox \"e93b5e626098f57bfe9c03a0798728af6655160c5539164774fedf5a44e5ff1b\" returns successfully" Feb 12 19:22:29.919596 kubelet[2501]: W0212 19:22:29.919565 2501 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 19:22:34.598915 kubelet[2501]: E0212 19:22:34.598869 2501 request.go:1075] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body) Feb 12 19:22:34.599268 kubelet[2501]: E0212 19:22:34.598938 2501 controller.go:189] failed to update lease, error: unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body) Feb 12 19:22:37.994901 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:37.995238 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.004342 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.013914 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.023553 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.033126 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.044521 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#259 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.055168 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#260 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.083095 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#260 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.083389 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#259 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.093889 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.103693 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.112947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.122558 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.131759 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.142264 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:22:38.169489 kernel: 
Feb 12 19:22:38.169739 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:22:38.178700 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:22:38.189206 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#265 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:22:38.199287 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:22:38.208851 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:22:38.219221 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#259 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:22:38.230008 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#260 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:22:38.257317 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#260 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 12 19:22:38.257644 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#259 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
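The burst of hv_storvsc messages decodes to retriable write failures on the Hyper-V virtual disk: cmd 0x2a is the SCSI WRITE(10) opcode, scsi 0x2 is CHECK CONDITION, srb 0x4 corresponds to SRB_STATUS_ERROR in the Windows storport convention that Hyper-V uses, and hv 0xc0000001 matches the generic NTSTATUS failure STATUS_UNSUCCESSFUL. On Azure this pattern usually accompanies throttled or briefly interrupted disk I/O. A small lookup table makes the fields readable; the sketch below hard-codes only the values seen in this log and is a reading aid, not a complete SCSI/SRB decoder:

```go
package main

import "fmt"

// Known values only, taken from the log above.
var (
	scsiOp     = map[byte]string{0x2a: "WRITE(10)"}
	scsiStatus = map[byte]string{0x02: "CHECK CONDITION"}
	srbStatus  = map[byte]string{0x04: "SRB_STATUS_ERROR"}
	hvStatus   = map[uint32]string{0xc0000001: "STATUS_UNSUCCESSFUL (generic failure)"}
)

func main() {
	// Fields from: "cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001"
	fmt.Printf("op=%s scsi=%s srb=%s hv=%s\n",
		scsiOp[0x2a], scsiStatus[0x02], srbStatus[0x04], hvStatus[0xc0000001])
}
```

Read together with the etcd read timeouts and the failed lease renewals above, a stalled or throttled OS disk is the most economical explanation for this stretch of the log.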