Feb 9 09:56:13.072758 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 9 09:56:13.072778 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024 Feb 9 09:56:13.072786 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Feb 9 09:56:13.072793 kernel: printk: bootconsole [pl11] enabled Feb 9 09:56:13.072798 kernel: efi: EFI v2.70 by EDK II Feb 9 09:56:13.072803 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3e198 RNG=0x3fd89998 MEMRESERVE=0x37e73f98 Feb 9 09:56:13.072810 kernel: random: crng init done Feb 9 09:56:13.072815 kernel: ACPI: Early table checksum verification disabled Feb 9 09:56:13.072821 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Feb 9 09:56:13.072826 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:56:13.072831 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:56:13.072838 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 9 09:56:13.072844 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:56:13.072849 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:56:13.072856 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:56:13.072862 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:56:13.072868 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:56:13.072875 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:56:13.072881 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Feb 9 09:56:13.072887 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 09:56:13.072893 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Feb 9 09:56:13.072898 kernel: NUMA: Failed to initialise from firmware Feb 9 09:56:13.072904 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] Feb 9 09:56:13.072910 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff] Feb 9 09:56:13.072916 kernel: Zone ranges: Feb 9 09:56:13.072922 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Feb 9 09:56:13.072927 kernel: DMA32 empty Feb 9 09:56:13.072934 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Feb 9 09:56:13.072940 kernel: Movable zone start for each node Feb 9 09:56:13.072946 kernel: Early memory node ranges Feb 9 09:56:13.072951 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Feb 9 09:56:13.072957 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Feb 9 09:56:13.072963 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Feb 9 09:56:13.072969 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Feb 9 09:56:13.072975 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Feb 9 09:56:13.072980 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Feb 9 09:56:13.072986 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Feb 9 09:56:13.072992 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Feb 9 09:56:13.072997 kernel: node 0: [mem 
0x0000000100000000-0x00000001bfffffff] Feb 9 09:56:13.073004 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Feb 9 09:56:13.073013 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Feb 9 09:56:13.073019 kernel: psci: probing for conduit method from ACPI. Feb 9 09:56:13.073025 kernel: psci: PSCIv1.1 detected in firmware. Feb 9 09:56:13.073031 kernel: psci: Using standard PSCI v0.2 function IDs Feb 9 09:56:13.073038 kernel: psci: MIGRATE_INFO_TYPE not supported. Feb 9 09:56:13.073044 kernel: psci: SMC Calling Convention v1.4 Feb 9 09:56:13.073050 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 Feb 9 09:56:13.073056 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 Feb 9 09:56:13.073062 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Feb 9 09:56:13.073069 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Feb 9 09:56:13.073075 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 9 09:56:13.073081 kernel: Detected PIPT I-cache on CPU0 Feb 9 09:56:13.073087 kernel: CPU features: detected: GIC system register CPU interface Feb 9 09:56:13.073093 kernel: CPU features: detected: Hardware dirty bit management Feb 9 09:56:13.073099 kernel: CPU features: detected: Spectre-BHB Feb 9 09:56:13.073105 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 9 09:56:13.073112 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 9 09:56:13.073118 kernel: CPU features: detected: ARM erratum 1418040 Feb 9 09:56:13.073124 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Feb 9 09:56:13.073130 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Feb 9 09:56:13.073136 kernel: Policy zone: Normal Feb 9 09:56:13.073144 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d Feb 9 09:56:13.073150 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 09:56:13.073157 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 09:56:13.073163 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 09:56:13.073169 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 09:56:13.073176 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB) Feb 9 09:56:13.073183 kernel: Memory: 3991936K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202224K reserved, 0K cma-reserved) Feb 9 09:56:13.073189 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 09:56:13.073195 kernel: trace event string verifier disabled Feb 9 09:56:13.073201 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 9 09:56:13.073207 kernel: rcu: RCU event tracing is enabled. Feb 9 09:56:13.073214 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 09:56:13.073220 kernel: Trampoline variant of Tasks RCU enabled. Feb 9 09:56:13.073226 kernel: Tracing variant of Tasks RCU enabled. Feb 9 09:56:13.073233 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 9 09:56:13.073239 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 09:56:13.073246 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 9 09:56:13.073252 kernel: GICv3: 960 SPIs implemented Feb 9 09:56:13.073258 kernel: GICv3: 0 Extended SPIs implemented Feb 9 09:56:13.073264 kernel: GICv3: Distributor has no Range Selector support Feb 9 09:56:13.073270 kernel: Root IRQ handler: gic_handle_irq Feb 9 09:56:13.073276 kernel: GICv3: 16 PPIs implemented Feb 9 09:56:13.073282 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Feb 9 09:56:13.073288 kernel: ITS: No ITS available, not enabling LPIs Feb 9 09:56:13.073294 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 09:56:13.073300 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 9 09:56:13.073307 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 9 09:56:13.073313 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 9 09:56:13.073321 kernel: Console: colour dummy device 80x25 Feb 9 09:56:13.073327 kernel: printk: console [tty1] enabled Feb 9 09:56:13.073333 kernel: ACPI: Core revision 20210730 Feb 9 09:56:13.073340 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 9 09:56:13.073346 kernel: pid_max: default: 32768 minimum: 301 Feb 9 09:56:13.073353 kernel: LSM: Security Framework initializing Feb 9 09:56:13.073359 kernel: SELinux: Initializing. Feb 9 09:56:13.073365 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 09:56:13.073372 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 09:56:13.073380 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Feb 9 09:56:13.073386 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Feb 9 09:56:13.073392 kernel: rcu: Hierarchical SRCU implementation. Feb 9 09:56:13.073398 kernel: Remapping and enabling EFI services. Feb 9 09:56:13.073404 kernel: smp: Bringing up secondary CPUs ... Feb 9 09:56:13.073411 kernel: Detected PIPT I-cache on CPU1 Feb 9 09:56:13.073417 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Feb 9 09:56:13.073423 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 09:56:13.073429 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 9 09:56:13.073437 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 09:56:13.073443 kernel: SMP: Total of 2 processors activated. 
Feb 9 09:56:13.073449 kernel: CPU features: detected: 32-bit EL0 Support Feb 9 09:56:13.073456 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Feb 9 09:56:13.073462 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 9 09:56:13.073469 kernel: CPU features: detected: CRC32 instructions Feb 9 09:56:13.073475 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 9 09:56:13.073481 kernel: CPU features: detected: LSE atomic instructions Feb 9 09:56:13.073487 kernel: CPU features: detected: Privileged Access Never Feb 9 09:56:13.073495 kernel: CPU: All CPU(s) started at EL1 Feb 9 09:56:13.073501 kernel: alternatives: patching kernel code Feb 9 09:56:13.073512 kernel: devtmpfs: initialized Feb 9 09:56:13.073531 kernel: KASLR enabled Feb 9 09:56:13.073538 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 09:56:13.073545 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 09:56:13.073552 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 09:56:13.073558 kernel: SMBIOS 3.1.0 present. Feb 9 09:56:13.073565 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023 Feb 9 09:56:13.073572 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 09:56:13.073581 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 9 09:56:13.074131 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 9 09:56:13.074142 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 9 09:56:13.074149 kernel: audit: initializing netlink subsys (disabled) Feb 9 09:56:13.074156 kernel: audit: type=2000 audit(0.088:1): state=initialized audit_enabled=0 res=1 Feb 9 09:56:13.074163 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 09:56:13.074170 kernel: cpuidle: using governor menu Feb 9 09:56:13.074180 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Feb 9 09:56:13.074187 kernel: ASID allocator initialised with 32768 entries Feb 9 09:56:13.074194 kernel: ACPI: bus type PCI registered Feb 9 09:56:13.074200 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 09:56:13.074207 kernel: Serial: AMBA PL011 UART driver Feb 9 09:56:13.074214 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 09:56:13.074221 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Feb 9 09:56:13.074227 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 09:56:13.074234 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Feb 9 09:56:13.074242 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 09:56:13.074249 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 9 09:56:13.074255 kernel: ACPI: Added _OSI(Module Device) Feb 9 09:56:13.074262 kernel: ACPI: Added _OSI(Processor Device) Feb 9 09:56:13.074269 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 09:56:13.074275 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 09:56:13.074282 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 09:56:13.074288 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 09:56:13.074295 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 09:56:13.074303 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 09:56:13.074310 kernel: ACPI: Interpreter enabled Feb 9 09:56:13.074317 kernel: ACPI: Using GIC for interrupt routing Feb 9 09:56:13.074323 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Feb 9 09:56:13.074330 kernel: printk: console [ttyAMA0] enabled Feb 9 09:56:13.074336 kernel: printk: bootconsole [pl11] disabled Feb 9 09:56:13.074343 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Feb 9 09:56:13.074350 kernel: iommu: Default domain type: Translated Feb 9 09:56:13.074356 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 9 09:56:13.074364 kernel: vgaarb: loaded Feb 9 09:56:13.074371 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 09:56:13.074378 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 09:56:13.074385 kernel: PTP clock support registered Feb 9 09:56:13.074391 kernel: Registered efivars operations Feb 9 09:56:13.074397 kernel: No ACPI PMU IRQ for CPU0 Feb 9 09:56:13.074404 kernel: No ACPI PMU IRQ for CPU1 Feb 9 09:56:13.074411 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 9 09:56:13.074417 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 09:56:13.074425 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 09:56:13.074432 kernel: pnp: PnP ACPI init Feb 9 09:56:13.074438 kernel: pnp: PnP ACPI: found 0 devices Feb 9 09:56:13.074445 kernel: NET: Registered PF_INET protocol family Feb 9 09:56:13.074452 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 09:56:13.074459 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 9 09:56:13.074465 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 09:56:13.074472 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 09:56:13.074479 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 9 09:56:13.074487 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 9 09:56:13.074494 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 09:56:13.074501 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 09:56:13.074507 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 09:56:13.074514 kernel: PCI: CLS 0 bytes, default 64 Feb 9 09:56:13.074538 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Feb 9 09:56:13.074546 kernel: kvm [1]: HYP mode not available Feb 9 09:56:13.074552 kernel: Initialise system trusted keyrings Feb 9 09:56:13.074559 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 9 09:56:13.074567 kernel: Key type asymmetric registered Feb 9 09:56:13.074574 kernel: Asymmetric key parser 'x509' registered Feb 9 09:56:13.074581 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 09:56:13.074587 kernel: io scheduler mq-deadline registered Feb 9 09:56:13.074594 kernel: io scheduler kyber registered Feb 9 09:56:13.074600 kernel: io scheduler bfq registered Feb 9 09:56:13.074607 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 09:56:13.074613 kernel: thunder_xcv, ver 1.0 Feb 9 09:56:13.074620 kernel: thunder_bgx, ver 1.0 Feb 9 09:56:13.074628 kernel: nicpf, ver 1.0 Feb 9 09:56:13.074634 kernel: nicvf, ver 1.0 Feb 9 09:56:13.074770 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 9 09:56:13.074833 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:56:12 UTC (1707472572) Feb 9 09:56:13.074843 kernel: efifb: probing for efifb Feb 9 09:56:13.074849 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 9 09:56:13.074856 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 9 09:56:13.074863 kernel: efifb: scrolling: redraw Feb 9 09:56:13.074872 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 9 09:56:13.074879 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 09:56:13.074885 kernel: fb0: EFI VGA frame buffer device Feb 9 09:56:13.074892 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Feb 9 09:56:13.074899 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 09:56:13.074906 kernel: NET: Registered PF_INET6 protocol family Feb 9 09:56:13.074912 kernel: Segment Routing with IPv6 Feb 9 09:56:13.074919 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 09:56:13.074926 kernel: NET: Registered PF_PACKET protocol family Feb 9 09:56:13.074934 kernel: Key type dns_resolver registered Feb 9 09:56:13.074940 kernel: registered taskstats version 1 Feb 9 09:56:13.074947 kernel: Loading compiled-in X.509 certificates Feb 9 09:56:13.074954 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d' Feb 9 09:56:13.074960 kernel: Key type .fscrypt registered Feb 9 09:56:13.074967 kernel: Key type fscrypt-provisioning registered Feb 9 09:56:13.074973 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 09:56:13.074980 kernel: ima: Allocated hash algorithm: sha1 Feb 9 09:56:13.074987 kernel: ima: No architecture policies found Feb 9 09:56:13.074995 kernel: Freeing unused kernel memory: 34688K Feb 9 09:56:13.075001 kernel: Run /init as init process Feb 9 09:56:13.075008 kernel: with arguments: Feb 9 09:56:13.075015 kernel: /init Feb 9 09:56:13.075021 kernel: with environment: Feb 9 09:56:13.075028 kernel: HOME=/ Feb 9 09:56:13.075034 kernel: TERM=linux Feb 9 09:56:13.075041 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 09:56:13.075050 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:56:13.075060 systemd[1]: Detected virtualization microsoft. Feb 9 09:56:13.075068 systemd[1]: Detected architecture arm64. Feb 9 09:56:13.075075 systemd[1]: Running in initrd. Feb 9 09:56:13.075082 systemd[1]: No hostname configured, using default hostname. Feb 9 09:56:13.075089 systemd[1]: Hostname set to . Feb 9 09:56:13.075096 systemd[1]: Initializing machine ID from random generator. Feb 9 09:56:13.075103 systemd[1]: Queued start job for default target initrd.target. Feb 9 09:56:13.075111 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:56:13.075119 systemd[1]: Reached target cryptsetup.target. Feb 9 09:56:13.075125 systemd[1]: Reached target paths.target. Feb 9 09:56:13.075132 systemd[1]: Reached target slices.target. Feb 9 09:56:13.075139 systemd[1]: Reached target swap.target. Feb 9 09:56:13.075146 systemd[1]: Reached target timers.target. Feb 9 09:56:13.075154 systemd[1]: Listening on iscsid.socket. Feb 9 09:56:13.075161 systemd[1]: Listening on iscsiuio.socket. Feb 9 09:56:13.075169 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 09:56:13.075176 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 09:56:13.075183 systemd[1]: Listening on systemd-journald.socket. Feb 9 09:56:13.075190 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:56:13.075197 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:56:13.075205 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:56:13.075212 systemd[1]: Reached target sockets.target. Feb 9 09:56:13.075219 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:56:13.075226 systemd[1]: Finished network-cleanup.service. Feb 9 09:56:13.075234 systemd[1]: Starting systemd-fsck-usr.service... 
Feb 9 09:56:13.075241 systemd[1]: Starting systemd-journald.service... Feb 9 09:56:13.075249 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:56:13.075256 systemd[1]: Starting systemd-resolved.service... Feb 9 09:56:13.075263 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 09:56:13.075270 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 09:56:13.075281 systemd-journald[276]: Journal started Feb 9 09:56:13.075321 systemd-journald[276]: Runtime Journal (/run/log/journal/59c85f70913f42b19a04ad4606ee53ff) is 8.0M, max 78.6M, 70.6M free. Feb 9 09:56:13.038490 systemd-modules-load[277]: Inserted module 'overlay' Feb 9 09:56:13.087313 systemd[1]: Started systemd-journald.service. Feb 9 09:56:13.094229 systemd-modules-load[277]: Inserted module 'br_netfilter' Feb 9 09:56:13.099372 kernel: Bridge firewalling registered Feb 9 09:56:13.103211 systemd-resolved[278]: Positive Trust Anchors: Feb 9 09:56:13.103228 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:56:13.141695 kernel: audit: type=1130 audit(1707472573.112:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.141719 kernel: SCSI subsystem initialized Feb 9 09:56:13.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.103255 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:56:13.208709 kernel: audit: type=1130 audit(1707472573.146:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.208734 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 09:56:13.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.105330 systemd-resolved[278]: Defaulting to hostname 'linux'. Feb 9 09:56:13.248073 kernel: device-mapper: uevent: version 1.0.3 Feb 9 09:56:13.248095 kernel: audit: type=1130 audit(1707472573.213:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.248105 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 09:56:13.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.137335 systemd[1]: Started systemd-resolved.service. 
Feb 9 09:56:13.275185 kernel: audit: type=1130 audit(1707472573.252:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.146462 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:56:13.236863 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 09:56:13.310473 kernel: audit: type=1130 audit(1707472573.280:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.253023 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 09:56:13.280949 systemd[1]: Reached target nss-lookup.target. Feb 9 09:56:13.290540 systemd-modules-load[277]: Inserted module 'dm_multipath' Feb 9 09:56:13.317275 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 09:56:13.373629 kernel: audit: type=1130 audit(1707472573.352:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.328357 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:56:13.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.346755 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:56:13.417475 kernel: audit: type=1130 audit(1707472573.381:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.371735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:56:13.446185 kernel: audit: type=1130 audit(1707472573.422:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.404904 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:56:13.417639 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 09:56:13.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:13.483746 dracut-cmdline[298]: dracut-dracut-053 Feb 9 09:56:13.483746 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=t Feb 9 09:56:13.483746 dracut-cmdline[298]: tyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d Feb 9 09:56:13.531639 kernel: audit: type=1130 audit(1707472573.464:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.531664 kernel: Loading iSCSI transport class v2.0-870. Feb 9 09:56:13.424250 systemd[1]: Starting dracut-cmdline.service... Feb 9 09:56:13.455428 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:56:13.555536 kernel: iscsi: registered transport (tcp) Feb 9 09:56:13.570941 kernel: iscsi: registered transport (qla4xxx) Feb 9 09:56:13.570954 kernel: QLogic iSCSI HBA Driver Feb 9 09:56:13.606589 systemd[1]: Finished dracut-cmdline.service. Feb 9 09:56:13.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:13.612363 systemd[1]: Starting dracut-pre-udev.service... Feb 9 09:56:13.666541 kernel: raid6: neonx8 gen() 13813 MB/s Feb 9 09:56:13.685531 kernel: raid6: neonx8 xor() 10820 MB/s Feb 9 09:56:13.709531 kernel: raid6: neonx4 gen() 13579 MB/s Feb 9 09:56:13.729529 kernel: raid6: neonx4 xor() 11252 MB/s Feb 9 09:56:13.750530 kernel: raid6: neonx2 gen() 13110 MB/s Feb 9 09:56:13.772530 kernel: raid6: neonx2 xor() 10354 MB/s Feb 9 09:56:13.793529 kernel: raid6: neonx1 gen() 10502 MB/s Feb 9 09:56:13.813529 kernel: raid6: neonx1 xor() 8773 MB/s Feb 9 09:56:13.835530 kernel: raid6: int64x8 gen() 6297 MB/s Feb 9 09:56:13.856529 kernel: raid6: int64x8 xor() 3550 MB/s Feb 9 09:56:13.876529 kernel: raid6: int64x4 gen() 7284 MB/s Feb 9 09:56:13.898530 kernel: raid6: int64x4 xor() 3854 MB/s Feb 9 09:56:13.918528 kernel: raid6: int64x2 gen() 6158 MB/s Feb 9 09:56:13.939529 kernel: raid6: int64x2 xor() 3322 MB/s Feb 9 09:56:13.961529 kernel: raid6: int64x1 gen() 5043 MB/s Feb 9 09:56:13.986458 kernel: raid6: int64x1 xor() 2647 MB/s Feb 9 09:56:13.986467 kernel: raid6: using algorithm neonx8 gen() 13813 MB/s Feb 9 09:56:13.986475 kernel: raid6: .... xor() 10820 MB/s, rmw enabled Feb 9 09:56:13.990955 kernel: raid6: using neon recovery algorithm Feb 9 09:56:14.009532 kernel: xor: measuring software checksum speed Feb 9 09:56:14.017806 kernel: 8regs : 17275 MB/sec Feb 9 09:56:14.017816 kernel: 32regs : 20739 MB/sec Feb 9 09:56:14.021842 kernel: arm64_neon : 27731 MB/sec Feb 9 09:56:14.027627 kernel: xor: using function: arm64_neon (27731 MB/sec) Feb 9 09:56:14.083535 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Feb 9 09:56:14.093293 systemd[1]: Finished dracut-pre-udev.service. Feb 9 09:56:14.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:14.101000 audit: BPF prog-id=7 op=LOAD Feb 9 09:56:14.101000 audit: BPF prog-id=8 op=LOAD Feb 9 09:56:14.102654 systemd[1]: Starting systemd-udevd.service... Feb 9 09:56:14.120785 systemd-udevd[474]: Using default interface naming scheme 'v252'. Feb 9 09:56:14.127699 systemd[1]: Started systemd-udevd.service. Feb 9 09:56:14.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:14.139088 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 09:56:14.155419 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation Feb 9 09:56:14.187801 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 09:56:14.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:14.193612 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:56:14.228751 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:56:14.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:14.284547 kernel: hv_vmbus: Vmbus version:5.3 Feb 9 09:56:14.296547 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 9 09:56:14.315547 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 9 09:56:14.315595 kernel: hv_vmbus: registering driver hid_hyperv Feb 9 09:56:14.342913 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 9 09:56:14.342967 kernel: hv_vmbus: registering driver hv_netvsc Feb 9 09:56:14.342977 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 9 09:56:14.357861 kernel: hv_vmbus: registering driver hv_storvsc Feb 9 09:56:14.357914 kernel: scsi host1: storvsc_host_t Feb 9 09:56:14.361851 kernel: scsi host0: storvsc_host_t Feb 9 09:56:14.362537 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 9 09:56:14.378533 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 9 09:56:14.397100 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 9 09:56:14.397313 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 09:56:14.404537 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 9 09:56:14.404698 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 9 09:56:14.404789 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 09:56:14.413555 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 09:56:14.413724 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 9 09:56:14.422936 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 9 09:56:14.423069 kernel: hv_netvsc 002248bb-d916-0022-48bb-d916002248bb eth0: VF slot 1 added Feb 9 09:56:14.435881 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:56:14.444548 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 09:56:14.459648 kernel: hv_vmbus: registering driver hv_pci Feb 9 09:56:14.459703 kernel: hv_pci 43707c1f-c787-4021-be62-23b9aee4ca76: PCI VMBus probing: Using version 0x10004 Feb 9 09:56:14.478470 kernel: hv_pci 
43707c1f-c787-4021-be62-23b9aee4ca76: PCI host bridge to bus c787:00 Feb 9 09:56:14.478659 kernel: pci_bus c787:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 9 09:56:14.478756 kernel: pci_bus c787:00: No busn resource found for root bus, will use [bus 00-ff] Feb 9 09:56:14.493664 kernel: pci c787:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 9 09:56:14.506725 kernel: pci c787:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 9 09:56:14.530633 kernel: pci c787:00:02.0: enabling Extended Tags Feb 9 09:56:14.560800 kernel: pci c787:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c787:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Feb 9 09:56:14.561023 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (541) Feb 9 09:56:14.561035 kernel: pci_bus c787:00: busn_res: [bus 00-ff] end is updated to 00 Feb 9 09:56:14.573510 kernel: pci c787:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 9 09:56:14.578891 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 09:56:14.599241 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:56:14.633551 kernel: mlx5_core c787:00:02.0: firmware version: 16.30.1284 Feb 9 09:56:14.648466 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 09:56:14.668362 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 09:56:14.682598 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 09:56:14.713438 systemd[1]: Starting disk-uuid.service... Feb 9 09:56:14.743050 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:56:14.753541 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:56:14.821536 kernel: mlx5_core c787:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Feb 9 09:56:14.895678 kernel: hv_netvsc 002248bb-d916-0022-48bb-d916002248bb eth0: VF registering: eth1 Feb 9 09:56:14.895874 kernel: mlx5_core c787:00:02.0 eth1: joined to eth0 Feb 9 09:56:14.929536 kernel: mlx5_core c787:00:02.0 enP51079s1: renamed from eth1 Feb 9 09:56:15.755551 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:56:15.756015 disk-uuid[595]: The operation has completed successfully. Feb 9 09:56:15.819238 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 09:56:15.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:15.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:15.819337 systemd[1]: Finished disk-uuid.service. Feb 9 09:56:15.830537 systemd[1]: Starting verity-setup.service... Feb 9 09:56:15.865907 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 09:56:15.940229 systemd[1]: Found device dev-mapper-usr.device. Feb 9 09:56:15.946167 systemd[1]: Mounting sysusr-usr.mount... Feb 9 09:56:15.956282 systemd[1]: Finished verity-setup.service. Feb 9 09:56:15.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:16.015545 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Feb 9 09:56:16.016439 systemd[1]: Mounted sysusr-usr.mount. Feb 9 09:56:16.020818 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 09:56:16.021603 systemd[1]: Starting ignition-setup.service... Feb 9 09:56:16.029890 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 09:56:16.071134 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:56:16.071198 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:56:16.080718 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:56:16.108125 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 09:56:16.144955 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 09:56:16.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:16.154000 audit: BPF prog-id=9 op=LOAD Feb 9 09:56:16.156730 systemd[1]: Starting systemd-networkd.service... Feb 9 09:56:16.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:16.164221 systemd[1]: Finished ignition-setup.service. Feb 9 09:56:16.170209 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 09:56:16.196910 systemd-networkd[847]: lo: Link UP Feb 9 09:56:16.196918 systemd-networkd[847]: lo: Gained carrier Feb 9 09:56:16.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:16.197292 systemd-networkd[847]: Enumeration completed Feb 9 09:56:16.198129 systemd-networkd[847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:56:16.198393 systemd[1]: Started systemd-networkd.service. Feb 9 09:56:16.207562 systemd[1]: Reached target network.target. Feb 9 09:56:16.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:16.217083 systemd[1]: Starting iscsiuio.service... Feb 9 09:56:16.235633 systemd[1]: Started iscsiuio.service. Feb 9 09:56:16.245958 systemd[1]: Starting iscsid.service... Feb 9 09:56:16.271682 iscsid[854]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:56:16.271682 iscsid[854]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 09:56:16.271682 iscsid[854]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 09:56:16.271682 iscsid[854]: If using hardware iscsi like qla4xxx this message can be ignored. 
Feb 9 09:56:16.271682 iscsid[854]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:56:16.271682 iscsid[854]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 09:56:16.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:16.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:16.267141 systemd[1]: Started iscsid.service. Feb 9 09:56:16.282266 systemd[1]: Starting dracut-initqueue.service... Feb 9 09:56:16.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:16.320288 systemd[1]: Finished dracut-initqueue.service. Feb 9 09:56:16.325593 systemd[1]: Reached target remote-fs-pre.target. Feb 9 09:56:16.333015 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:56:16.344597 systemd[1]: Reached target remote-fs.target. Feb 9 09:56:16.357309 systemd[1]: Starting dracut-pre-mount.service... Feb 9 09:56:16.383846 systemd[1]: Finished dracut-pre-mount.service. Feb 9 09:56:16.433537 kernel: mlx5_core c787:00:02.0 enP51079s1: Link up Feb 9 09:56:16.474589 kernel: hv_netvsc 002248bb-d916-0022-48bb-d916002248bb eth0: Data path switched to VF: enP51079s1 Feb 9 09:56:16.481662 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:56:16.481251 systemd-networkd[847]: enP51079s1: Link UP Feb 9 09:56:16.481328 systemd-networkd[847]: eth0: Link UP Feb 9 09:56:16.481445 systemd-networkd[847]: eth0: Gained carrier Feb 9 09:56:16.491731 systemd-networkd[847]: enP51079s1: Gained carrier Feb 9 09:56:16.503603 systemd-networkd[847]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:56:16.899186 ignition[849]: Ignition 2.14.0 Feb 9 09:56:16.899197 ignition[849]: Stage: fetch-offline Feb 9 09:56:16.899250 ignition[849]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:56:16.899273 ignition[849]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:56:16.941968 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:56:16.942134 ignition[849]: parsed url from cmdline: "" Feb 9 09:56:16.949242 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 09:56:16.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:16.942138 ignition[849]: no config URL provided Feb 9 09:56:16.955139 systemd[1]: Starting ignition-fetch.service... 
Feb 9 09:56:16.942144 ignition[849]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 09:56:16.942152 ignition[849]: no config at "/usr/lib/ignition/user.ign" Feb 9 09:56:16.942157 ignition[849]: failed to fetch config: resource requires networking Feb 9 09:56:16.942378 ignition[849]: Ignition finished successfully Feb 9 09:56:16.966768 ignition[873]: Ignition 2.14.0 Feb 9 09:56:16.966774 ignition[873]: Stage: fetch Feb 9 09:56:16.966874 ignition[873]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:56:16.966892 ignition[873]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:56:16.969669 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:56:16.974134 ignition[873]: parsed url from cmdline: "" Feb 9 09:56:16.974144 ignition[873]: no config URL provided Feb 9 09:56:16.974151 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 09:56:16.974163 ignition[873]: no config at "/usr/lib/ignition/user.ign" Feb 9 09:56:16.974201 ignition[873]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 9 09:56:17.075135 ignition[873]: GET result: OK Feb 9 09:56:17.075287 ignition[873]: config has been read from IMDS userdata Feb 9 09:56:17.075360 ignition[873]: parsing config with SHA512: 5ca86a22877a7ed65913bbfb6ff6e9c86bf6b8eda80d98ff1bbc0ab538297662cf05f4c12e20a80472ece98f24fbacca14f077b45cb8ed1f28c5ee65d01cc7bc Feb 9 09:56:17.142807 unknown[873]: fetched base config from "system" Feb 9 09:56:17.142819 unknown[873]: fetched base config from "system" Feb 9 09:56:17.143573 ignition[873]: fetch: fetch complete Feb 9 09:56:17.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:17.142824 unknown[873]: fetched user config from "azure" Feb 9 09:56:17.199284 kernel: kauditd_printk_skb: 19 callbacks suppressed Feb 9 09:56:17.199322 kernel: audit: type=1130 audit(1707472577.157:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:17.143579 ignition[873]: fetch: fetch passed Feb 9 09:56:17.152842 systemd[1]: Finished ignition-fetch.service. Feb 9 09:56:17.143625 ignition[873]: Ignition finished successfully Feb 9 09:56:17.167788 systemd[1]: Starting ignition-kargs.service... Feb 9 09:56:17.250889 kernel: audit: type=1130 audit(1707472577.221:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:17.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:17.197875 ignition[879]: Ignition 2.14.0 Feb 9 09:56:17.216224 systemd[1]: Finished ignition-kargs.service. Feb 9 09:56:17.197882 ignition[879]: Stage: kargs Feb 9 09:56:17.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:17.222489 systemd[1]: Starting ignition-disks.service... 
Feb 9 09:56:17.197995 ignition[879]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:56:17.255634 systemd[1]: Finished ignition-disks.service. Feb 9 09:56:17.315362 kernel: audit: type=1130 audit(1707472577.265:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:17.198015 ignition[879]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:56:17.288017 systemd[1]: Reached target initrd-root-device.target. Feb 9 09:56:17.200664 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:56:17.298622 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:56:17.209888 ignition[879]: kargs: kargs passed Feb 9 09:56:17.311014 systemd[1]: Reached target local-fs.target. Feb 9 09:56:17.209943 ignition[879]: Ignition finished successfully Feb 9 09:56:17.320703 systemd[1]: Reached target sysinit.target. Feb 9 09:56:17.236257 ignition[885]: Ignition 2.14.0 Feb 9 09:56:17.330097 systemd[1]: Reached target basic.target. Feb 9 09:56:17.236263 ignition[885]: Stage: disks Feb 9 09:56:17.346015 systemd[1]: Starting systemd-fsck-root.service... Feb 9 09:56:17.390928 systemd-fsck[893]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks Feb 9 09:56:17.236359 ignition[885]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:56:17.430505 kernel: audit: type=1130 audit(1707472577.405:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:17.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:17.391336 systemd[1]: Finished systemd-fsck-root.service. Feb 9 09:56:17.236376 ignition[885]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:56:17.428125 systemd[1]: Mounting sysroot.mount... Feb 9 09:56:17.248995 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:56:17.251110 ignition[885]: disks: disks passed Feb 9 09:56:17.251162 ignition[885]: Ignition finished successfully Feb 9 09:56:17.471538 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 09:56:17.471916 systemd[1]: Mounted sysroot.mount. Feb 9 09:56:17.476190 systemd[1]: Reached target initrd-root-fs.target. Feb 9 09:56:17.494489 systemd[1]: Mounting sysroot-usr.mount... Feb 9 09:56:17.499872 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 09:56:17.508558 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 09:56:17.508595 systemd[1]: Reached target ignition-diskful.target. Feb 9 09:56:17.514966 systemd[1]: Mounted sysroot-usr.mount. Feb 9 09:56:17.534912 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 09:56:17.544592 systemd[1]: Starting initrd-setup-root.service... 
Feb 9 09:56:17.566364 initrd-setup-root[908]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 09:56:17.582782 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory Feb 9 09:56:17.601708 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (903) Feb 9 09:56:17.601741 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:56:17.601757 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:56:17.601766 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 09:56:17.616359 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:56:17.616383 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 09:56:17.631608 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 09:56:17.749673 systemd[1]: Finished initrd-setup-root.service. Feb 9 09:56:17.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:17.755640 systemd[1]: Starting ignition-mount.service... Feb 9 09:56:17.783653 kernel: audit: type=1130 audit(1707472577.754:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:17.787433 systemd[1]: Starting sysroot-boot.service... Feb 9 09:56:17.792377 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 09:56:17.792500 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 09:56:17.818998 ignition[970]: INFO : Ignition 2.14.0 Feb 9 09:56:17.818998 ignition[970]: INFO : Stage: mount Feb 9 09:56:17.832117 ignition[970]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:56:17.832117 ignition[970]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:56:17.832117 ignition[970]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:56:17.832117 ignition[970]: INFO : mount: mount passed Feb 9 09:56:17.832117 ignition[970]: INFO : Ignition finished successfully Feb 9 09:56:17.916248 kernel: audit: type=1130 audit(1707472577.843:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:17.916275 kernel: audit: type=1130 audit(1707472577.894:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:17.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:17.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:17.839247 systemd[1]: Finished ignition-mount.service. Feb 9 09:56:17.886814 systemd[1]: Finished sysroot-boot.service. 
Feb 9 09:56:18.019316 coreos-metadata[902]: Feb 09 09:56:18.019 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 09:56:18.028269 coreos-metadata[902]: Feb 09 09:56:18.027 INFO Fetch successful Feb 9 09:56:18.060954 coreos-metadata[902]: Feb 09 09:56:18.060 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 09:56:18.088769 coreos-metadata[902]: Feb 09 09:56:18.088 INFO Fetch successful Feb 9 09:56:18.094389 coreos-metadata[902]: Feb 09 09:56:18.094 INFO wrote hostname ci-3510.3.2-a-d10cdd880c to /sysroot/etc/hostname Feb 9 09:56:18.103631 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 09:56:18.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:18.136568 kernel: audit: type=1130 audit(1707472578.108:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:18.129378 systemd[1]: Starting ignition-files.service... Feb 9 09:56:18.143662 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 09:56:18.163618 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (982) Feb 9 09:56:18.178030 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:56:18.178069 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:56:18.184380 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:56:18.188511 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 09:56:18.203318 ignition[1001]: INFO : Ignition 2.14.0 Feb 9 09:56:18.203318 ignition[1001]: INFO : Stage: files Feb 9 09:56:18.214869 ignition[1001]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:56:18.214869 ignition[1001]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:56:18.214869 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:56:18.250718 ignition[1001]: DEBUG : files: compiled without relabeling support, skipping Feb 9 09:56:18.250718 ignition[1001]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 09:56:18.250718 ignition[1001]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 09:56:18.250718 ignition[1001]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 09:56:18.250718 ignition[1001]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 09:56:18.250718 ignition[1001]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 09:56:18.250718 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 09:56:18.250718 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 09:56:18.245224 unknown[1001]: wrote ssh authorized keys file for user: core Feb 9 09:56:18.267696 systemd-networkd[847]: eth0: Gained IPv6LL Feb 9 09:56:18.587053 ignition[1001]: INFO : files: 
createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 09:56:18.744454 ignition[1001]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 09:56:18.763116 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 09:56:18.763116 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 09:56:18.763116 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 09:56:18.900008 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 09:56:19.118873 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 09:56:19.132200 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 09:56:19.132200 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 09:56:19.132200 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 09:56:19.132200 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 9 09:56:19.397890 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 09:56:19.597633 ignition[1001]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 09:56:19.615271 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 09:56:19.615271 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:56:19.615271 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 09:56:19.788228 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 09:56:20.563751 ignition[1001]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 09:56:20.583037 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:56:20.583037 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:56:20.583037 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 09:56:20.622451 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: 
op(8): GET result: OK Feb 9 09:56:20.913253 ignition[1001]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 09:56:20.929936 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:56:20.929936 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:56:20.929936 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 09:56:20.988501 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 09:56:21.289107 ignition[1001]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 09:56:21.307320 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:56:21.557250 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1004) 
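The coreos-metadata entries near the start of this stage show the agent querying the Azure wire server (168.63.129.16) and the instance metadata service at 169.254.169.254 for the VM name before writing it to /sysroot/etc/hostname. A hedged Python sketch of that IMDS call follows; it is not the agent's actual code, and note that Azure IMDS requires the "Metadata: true" request header, which the log itself does not show:

    import urllib.request

    # URL copied from the coreos-metadata log entry above.
    IMDS_NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                     "?api-version=2017-08-01&format=text")

    def fetch_instance_name(timeout=5.0):
        # Azure IMDS rejects requests that lack the Metadata header.
        req = urllib.request.Request(IMDS_NAME_URL, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.read().decode().strip()

    name = fetch_instance_name()
    # The log shows the fetched name being written out as the hostname.
    with open("/etc/hostname", "w") as f:
        f.write(name + "\n")
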
Feb 9 09:56:21.557276 kernel: audit: type=1130 audit(1707472581.437:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.557288 kernel: audit: type=1130 audit(1707472581.528:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.387187 systemd[1]: mnt-oem3678514408.mount: Deactivated successfully. Feb 9 09:56:21.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.569652 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3678514408" Feb 9 09:56:21.569652 ignition[1001]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3678514408": device or resource busy Feb 9 09:56:21.569652 ignition[1001]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3678514408", trying btrfs: device or resource busy Feb 9 09:56:21.569652 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3678514408" Feb 9 09:56:21.569652 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3678514408" Feb 9 09:56:21.569652 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem3678514408" Feb 9 09:56:21.569652 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem3678514408" Feb 9 09:56:21.569652 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 09:56:21.569652 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:56:21.569652 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:56:21.569652 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3869246437" Feb 9 09:56:21.569652 ignition[1001]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3869246437": device or resource busy Feb 9 
09:56:21.569652 ignition[1001]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3869246437", trying btrfs: device or resource busy Feb 9 09:56:21.569652 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3869246437" Feb 9 09:56:21.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.413018 systemd[1]: mnt-oem3869246437.mount: Deactivated successfully. Feb 9 09:56:21.809126 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3869246437" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem3869246437" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem3869246437" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: op(18): [started] processing unit "nvidia.service" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: op(18): [finished] processing unit "nvidia.service" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: op(19): [started] processing unit "waagent.service" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: op(19): [finished] processing unit "waagent.service" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: op(1a): [started] processing unit "prepare-helm.service" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: op(1a): [finished] processing unit "prepare-helm.service" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: op(1c): [started] processing unit "containerd.service" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: op(1c): op(1d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: op(1c): op(1d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: op(1c): [finished] processing unit "containerd.service" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: op(1e): [started] processing unit "prepare-cni-plugins.service" Feb 9 09:56:21.809126 ignition[1001]: INFO : files: op(1e): op(1f): [started] writing unit 
"prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:56:21.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.424433 systemd[1]: Finished ignition-files.service. Feb 9 09:56:22.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(1e): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(20): [started] processing unit "prepare-critools.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(20): op(21): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(20): op(21): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(20): [finished] processing unit "prepare-critools.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(22): [started] setting preset to enabled for "prepare-helm.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(23): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(24): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(25): [started] setting preset to enabled for "nvidia.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(25): [finished] setting preset to enabled for "nvidia.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(26): [started] setting preset to enabled for "waagent.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: op(26): [finished] setting preset to enabled for "waagent.service" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:56:22.089502 ignition[1001]: 
INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:56:22.089502 ignition[1001]: INFO : files: files passed Feb 9 09:56:22.089502 ignition[1001]: INFO : Ignition finished successfully Feb 9 09:56:22.535361 kernel: kauditd_printk_skb: 13 callbacks suppressed Feb 9 09:56:22.535389 kernel: audit: type=1131 audit(1707472582.187:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.535401 kernel: audit: type=1130 audit(1707472582.225:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.535411 kernel: audit: type=1131 audit(1707472582.225:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.535420 kernel: audit: type=1131 audit(1707472582.258:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.535435 kernel: audit: type=1131 audit(1707472582.290:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.535446 kernel: audit: type=1131 audit(1707472582.295:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.535457 kernel: audit: type=1131 audit(1707472582.359:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.535467 kernel: audit: type=1131 audit(1707472582.492:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:22.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.535750 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:56:21.438767 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 09:56:22.580206 kernel: audit: type=1131 audit(1707472582.554:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.469772 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 09:56:22.618900 kernel: audit: type=1131 audit(1707472582.585:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.481080 systemd[1]: Starting ignition-quench.service... Feb 9 09:56:22.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.625000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:56:21.510592 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 09:56:21.510704 systemd[1]: Finished ignition-quench.service. Feb 9 09:56:22.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.548789 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:56:21.563510 systemd[1]: Reached target ignition-complete.target. 
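Each createFiles op in the stretch above follows the same pattern: GET the artifact (crictl, helm, the CNI plugins, kubelet, kubeadm, kubectl), log "file matches expected sum of: <sha512>", then write it under /sysroot. A minimal fetch-and-verify sketch in Python; the URL and digest below are placeholders, not values from a real release:

    import hashlib
    import urllib.request

    ARTIFACT_URL = "https://example.invalid/artifact.tar.gz"  # placeholder
    EXPECTED_SHA512 = "0" * 128                                # placeholder digest

    def fetch_and_verify(url, expected_sha512, dest):
        # Stream the download, hashing as we go, then compare digests.
        h = hashlib.sha512()
        with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
            for chunk in iter(lambda: resp.read(1 << 20), b""):
                h.update(chunk)
                out.write(chunk)
        if h.hexdigest() != expected_sha512:
            raise ValueError("checksum mismatch: got " + h.hexdigest())

    fetch_and_verify(ARTIFACT_URL, EXPECTED_SHA512, "/tmp/artifact.tar.gz")
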
Feb 9 09:56:22.680558 ignition[1039]: INFO : Ignition 2.14.0 Feb 9 09:56:22.680558 ignition[1039]: INFO : Stage: umount Feb 9 09:56:22.680558 ignition[1039]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:56:22.680558 ignition[1039]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:56:22.680558 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:56:22.680558 ignition[1039]: INFO : umount: umount passed Feb 9 09:56:22.680558 ignition[1039]: INFO : Ignition finished successfully Feb 9 09:56:22.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.587893 systemd[1]: Starting initrd-parse-etc.service... Feb 9 09:56:22.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.618210 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:56:22.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.618317 systemd[1]: Finished initrd-parse-etc.service. Feb 9 09:56:22.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.628067 systemd[1]: Reached target initrd-fs.target. Feb 9 09:56:22.829276 kernel: hv_netvsc 002248bb-d916-0022-48bb-d916002248bb eth0: Data path switched from VF: enP51079s1 Feb 9 09:56:22.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.644633 systemd[1]: Reached target initrd.target. Feb 9 09:56:22.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.663290 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:56:22.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.664296 systemd[1]: Starting dracut-pre-pivot.service... 
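The op(10)-op(13) and op(14)-op(17) entries earlier show how Ignition reaches the OEM partition: an ext4 mount of /dev/disk/by-label/OEM fails with "device or resource busy", the btrfs retry succeeds, waagent.service and nvidia.service are copied out, and the temporary mountpoint is unmounted again. A rough sketch of that try-one-filesystem-then-fall-back step, shelling out to mount(8) (Ignition does this natively in Go):

    import subprocess
    import tempfile

    def mount_with_fallback(device, fstypes=("ext4", "btrfs")):
        # Try each filesystem type in turn, as in ops 10-12 above.
        mountpoint = tempfile.mkdtemp(prefix="oem")
        last_err = None
        for fstype in fstypes:
            result = subprocess.run(["mount", "-t", fstype, device, mountpoint],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return mountpoint  # caller unmounts with umount(8) when done
            last_err = result.stderr.strip()
        raise RuntimeError("could not mount %s: %s" % (device, last_err))

    # Example (needs root, as in the initramfs):
    #   mnt = mount_with_fallback("/dev/disk/by-label/OEM")
    #   ... copy files out of the OEM partition ...
    #   subprocess.run(["umount", mnt], check=True)
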
Feb 9 09:56:22.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:22.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.743684 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:56:21.750493 systemd[1]: Starting initrd-cleanup.service... Feb 9 09:56:21.787076 systemd[1]: Stopped target nss-lookup.target. Feb 9 09:56:21.803223 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:56:21.814793 systemd[1]: Stopped target timers.target. Feb 9 09:56:21.831853 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:56:21.831970 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:56:21.847322 systemd[1]: Stopped target initrd.target. Feb 9 09:56:21.861841 systemd[1]: Stopped target basic.target. Feb 9 09:56:21.878387 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:56:21.891608 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:56:21.906457 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:56:21.919384 systemd[1]: Stopped target remote-fs.target. Feb 9 09:56:21.931666 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:56:22.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:21.944691 systemd[1]: Stopped target sysinit.target. Feb 9 09:56:21.962439 systemd[1]: Stopped target local-fs.target. Feb 9 09:56:21.980084 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:56:21.993527 systemd[1]: Stopped target swap.target. Feb 9 09:56:22.004919 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:56:22.965000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:56:22.965000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:56:22.965000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:56:22.965000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:56:22.965000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:56:22.005029 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:56:22.023383 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:56:22.042446 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:56:22.042555 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:56:22.054065 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:56:22.054159 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:56:23.013202 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Feb 9 09:56:23.013237 iscsid[854]: iscsid shutting down. Feb 9 09:56:22.067861 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:56:22.067944 systemd[1]: Stopped ignition-files.service. Feb 9 09:56:22.085207 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 09:56:22.085294 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 09:56:22.095944 systemd[1]: Stopping ignition-mount.service... Feb 9 09:56:22.117952 systemd[1]: Stopping sysroot-boot.service... Feb 9 09:56:22.123232 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:56:22.123453 systemd[1]: Stopped systemd-udev-trigger.service. 
Feb 9 09:56:22.138212 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:56:22.138351 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:56:22.154605 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:56:22.154713 systemd[1]: Stopped ignition-mount.service. Feb 9 09:56:22.189288 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:56:22.189774 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:56:22.189869 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:56:22.226500 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:56:22.226583 systemd[1]: Stopped ignition-disks.service. Feb 9 09:56:22.258594 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:56:22.258651 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:56:22.290999 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 09:56:22.291052 systemd[1]: Stopped ignition-fetch.service. Feb 9 09:56:22.296173 systemd[1]: Stopped target network.target. Feb 9 09:56:22.327696 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:56:22.327761 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:56:22.359902 systemd[1]: Stopped target paths.target. Feb 9 09:56:22.422588 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:56:22.430490 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 09:56:22.436261 systemd[1]: Stopped target slices.target. Feb 9 09:56:22.448461 systemd[1]: Stopped target sockets.target. Feb 9 09:56:22.453227 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:56:22.453275 systemd[1]: Closed iscsid.socket. Feb 9 09:56:22.465530 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:56:22.465551 systemd[1]: Closed iscsiuio.socket. Feb 9 09:56:22.477693 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:56:22.477737 systemd[1]: Stopped ignition-setup.service. Feb 9 09:56:22.521145 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:56:22.529799 systemd[1]: Stopping systemd-resolved.service... Feb 9 09:56:22.534480 systemd-networkd[847]: eth0: DHCPv6 lease lost Feb 9 09:56:23.013000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:56:22.540201 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:56:22.540291 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:56:22.576333 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:56:22.576423 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:56:22.585917 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:56:22.586014 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:56:22.626120 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:56:22.626164 systemd[1]: Closed systemd-networkd.socket. Feb 9 09:56:22.642070 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:56:22.642134 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:56:22.660900 systemd[1]: Stopping network-cleanup.service... Feb 9 09:56:22.674801 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:56:22.674886 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:56:22.686305 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:56:22.686358 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:56:22.700918 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Feb 9 09:56:22.700965 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:56:22.707630 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:56:22.722763 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:56:22.723361 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:56:22.723478 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:56:22.744078 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:56:22.744126 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:56:22.757205 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:56:22.757245 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:56:22.769726 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:56:22.769775 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:56:22.779860 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:56:22.779900 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:56:22.789656 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:56:22.789694 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:56:22.801204 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:56:22.819606 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 09:56:22.819680 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 09:56:22.829017 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:56:22.829073 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:56:22.834784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:56:22.834832 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:56:22.844761 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 09:56:22.845321 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:56:22.845432 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:56:22.921032 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:56:22.921143 systemd[1]: Stopped network-cleanup.service. Feb 9 09:56:22.930307 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:56:22.941255 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:56:22.965783 systemd[1]: Switching root. Feb 9 09:56:23.015469 systemd-journald[276]: Journal stopped Feb 9 09:56:27.027039 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:56:27.027060 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 09:56:27.027071 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:56:27.027080 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:56:27.027088 kernel: SELinux: policy capability open_perms=1 Feb 9 09:56:27.027096 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:56:27.027105 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:56:27.027113 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:56:27.027121 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:56:27.027129 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:56:27.027138 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:56:27.027147 systemd[1]: Successfully loaded SELinux policy in 125.853ms. Feb 9 09:56:27.027157 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.702ms. 
Feb 9 09:56:27.027168 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:56:27.027179 systemd[1]: Detected virtualization microsoft. Feb 9 09:56:27.027188 systemd[1]: Detected architecture arm64. Feb 9 09:56:27.027197 systemd[1]: Detected first boot. Feb 9 09:56:27.027206 systemd[1]: Hostname set to . Feb 9 09:56:27.027216 systemd[1]: Initializing machine ID from random generator. Feb 9 09:56:27.027224 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 09:56:27.027233 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:56:27.027242 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:56:27.027253 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:56:27.027263 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:56:27.027356 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:56:27.027370 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:56:27.027380 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:56:27.027389 systemd[1]: Created slice system-getty.slice. Feb 9 09:56:27.027401 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:56:27.027414 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:56:27.027423 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:56:27.027432 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:56:27.027441 systemd[1]: Created slice user.slice. Feb 9 09:56:27.027450 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:56:27.027459 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:56:27.027468 systemd[1]: Set up automount boot.automount. Feb 9 09:56:27.027477 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:56:27.027487 systemd[1]: Reached target integritysetup.target. Feb 9 09:56:27.027496 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:56:27.027506 systemd[1]: Reached target remote-fs.target. Feb 9 09:56:27.027517 systemd[1]: Reached target slices.target. Feb 9 09:56:27.027554 systemd[1]: Reached target swap.target. Feb 9 09:56:27.027564 systemd[1]: Reached target torcx.target. Feb 9 09:56:27.027573 systemd[1]: Reached target veritysetup.target. Feb 9 09:56:27.027582 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:56:27.027593 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:56:27.027603 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 09:56:27.027612 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 09:56:27.027621 systemd[1]: Listening on systemd-journald.socket. Feb 9 09:56:27.027631 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:56:27.027642 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:56:27.027651 systemd[1]: Listening on systemd-udevd-kernel.socket. 
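"Initializing machine ID from random generator." above means no /etc/machine-id existed yet (this is a first boot), so systemd generated one: a 128-bit ID stored as 32 lowercase hex characters. A small illustration of that on-disk format (systemd also sets UUID variant/version bits when it generates the ID; that detail is omitted here):

    import uuid

    def new_machine_id():
        # /etc/machine-id holds 32 lowercase hex characters, no dashes.
        return uuid.uuid4().hex

    machine_id = new_machine_id()
    print(machine_id)
    # On a real first boot this would be written to /etc/machine-id:
    # open("/etc/machine-id", "w").write(machine_id + "\n")
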
Feb 9 09:56:27.027662 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:56:27.027672 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:56:27.027681 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:56:27.027690 systemd[1]: Mounting media.mount... Feb 9 09:56:27.027699 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:56:27.027875 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:56:27.027888 systemd[1]: Mounting tmp.mount... Feb 9 09:56:27.027900 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:56:27.027910 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:56:27.027919 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:56:27.027929 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:56:27.028003 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:56:27.028020 systemd[1]: Starting modprobe@drm.service... Feb 9 09:56:27.028030 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:56:27.028039 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:56:27.028066 systemd[1]: Starting modprobe@loop.service... Feb 9 09:56:27.028081 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:56:27.028091 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 09:56:27.028101 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 09:56:27.028110 kernel: fuse: init (API version 7.34) Feb 9 09:56:27.028119 systemd[1]: Starting systemd-journald.service... Feb 9 09:56:27.028128 kernel: loop: module loaded Feb 9 09:56:27.028137 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:56:27.028147 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:56:27.028158 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:56:27.028171 systemd-journald[1217]: Journal started Feb 9 09:56:27.028214 systemd-journald[1217]: Runtime Journal (/run/log/journal/11816a05b6854a75b7ae44a54184b855) is 8.0M, max 78.6M, 70.6M free. Feb 9 09:56:27.024000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:56:27.024000 audit[1217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff2911020 a2=4000 a3=1 items=0 ppid=1 pid=1217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:56:27.024000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:56:27.047967 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:56:27.048024 systemd[1]: Started systemd-journald.service. Feb 9 09:56:27.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.057078 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:56:27.061984 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:56:27.066312 systemd[1]: Mounted media.mount. Feb 9 09:56:27.070150 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:56:27.074988 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:56:27.080417 systemd[1]: Mounted tmp.mount. 
Feb 9 09:56:27.084737 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:56:27.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.090401 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:56:27.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.096099 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:56:27.096295 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:56:27.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.101670 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:56:27.101901 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:56:27.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.107311 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:56:27.107458 systemd[1]: Finished modprobe@drm.service. Feb 9 09:56:27.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.112717 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:56:27.112866 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:56:27.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.118669 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:56:27.118817 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:56:27.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:27.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.124286 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:56:27.124456 systemd[1]: Finished modprobe@loop.service. Feb 9 09:56:27.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.129957 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:56:27.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.135615 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:56:27.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.141813 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:56:27.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.147565 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:56:27.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.153332 systemd[1]: Reached target network-pre.target. Feb 9 09:56:27.159888 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:56:27.165862 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:56:27.170652 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:56:27.175366 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:56:27.181330 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:56:27.186420 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:56:27.187607 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:56:27.192706 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:56:27.193859 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:56:27.199916 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:56:27.202967 systemd-journald[1217]: Time spent on flushing to /var/log/journal/11816a05b6854a75b7ae44a54184b855 is 19.947ms for 1040 entries. Feb 9 09:56:27.202967 systemd-journald[1217]: System Journal (/var/log/journal/11816a05b6854a75b7ae44a54184b855) is 8.0M, max 2.6G, 2.6G free. Feb 9 09:56:27.269040 systemd-journald[1217]: Received client request to flush runtime journal. 
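The journald line above is enough for a quick throughput check: flushing the runtime journal to /var/log/journal took 19.947 ms for 1040 entries, i.e. roughly 19 microseconds per entry.

    # Back-of-envelope check of the flush figures logged above.
    flush_ms, entries = 19.947, 1040
    print("%.1f us per entry" % (flush_ms / entries * 1000.0))  # ~19.2 us
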
Feb 9 09:56:27.269082 kernel: kauditd_printk_skb: 57 callbacks suppressed Feb 9 09:56:27.269098 kernel: audit: type=1130 audit(1707472587.256:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.214619 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:56:27.224325 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:56:27.232368 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:56:27.247053 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:56:27.282906 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:56:27.289169 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:56:27.296298 udevadm[1241]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 09:56:27.296206 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:56:27.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.321464 kernel: audit: type=1130 audit(1707472587.288:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.322149 kernel: audit: type=1130 audit(1707472587.294:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.430975 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:56:27.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.440951 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 09:56:27.461102 kernel: audit: type=1130 audit(1707472587.435:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.540862 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 09:56:27.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.571620 kernel: audit: type=1130 audit(1707472587.546:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:27.781683 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:56:27.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.788539 systemd[1]: Starting systemd-udevd.service... Feb 9 09:56:27.811825 kernel: audit: type=1130 audit(1707472587.786:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.826945 systemd-udevd[1252]: Using default interface naming scheme 'v252'. Feb 9 09:56:27.894907 systemd[1]: Started systemd-udevd.service. Feb 9 09:56:27.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.928722 systemd[1]: Starting systemd-networkd.service... Feb 9 09:56:27.935556 kernel: audit: type=1130 audit(1707472587.903:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:27.950636 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:56:27.958939 systemd[1]: Found device dev-ttyAMA0.device. Feb 9 09:56:28.000564 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 09:56:28.014625 systemd[1]: Started systemd-userdbd.service. Feb 9 09:56:28.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:28.042000 audit[1262]: AVC avc: denied { confidentiality } for pid=1262 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 09:56:28.072043 kernel: audit: type=1130 audit(1707472588.019:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:28.072129 kernel: audit: type=1400 audit(1707472588.042:119): avc: denied { confidentiality } for pid=1262 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 09:56:28.072177 kernel: hv_vmbus: registering driver hv_balloon Feb 9 09:56:28.089672 kernel: hv_vmbus: registering driver hyperv_fb Feb 9 09:56:28.101210 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 9 09:56:28.101340 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 9 09:56:28.042000 audit[1262]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaadcc89f00 a1=aa2c a2=ffff837724b0 a3=aaaadc9e7010 items=12 ppid=1252 pid=1262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:56:28.042000 audit: CWD cwd="/" Feb 9 09:56:28.042000 audit: PATH item=0 name=(null) inode=7316 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:56:28.042000 audit: PATH item=1 name=(null) inode=11562 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:56:28.042000 audit: PATH item=2 name=(null) inode=11562 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:56:28.042000 audit: PATH item=3 name=(null) inode=11563 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:56:28.042000 audit: PATH item=4 name=(null) inode=11562 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:56:28.042000 audit: PATH item=5 name=(null) inode=11564 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:56:28.042000 audit: PATH item=6 name=(null) inode=11562 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:56:28.042000 audit: PATH item=7 name=(null) inode=11565 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:56:28.042000 audit: PATH item=8 name=(null) inode=11562 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:56:28.042000 audit: PATH item=9 name=(null) inode=11566 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:56:28.042000 audit: PATH item=10 name=(null) inode=11562 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:56:28.042000 audit: PATH item=11 name=(null) inode=11567 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 9 09:56:28.042000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 09:56:28.136556 kernel: audit: type=1300 audit(1707472588.042:119): arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaadcc89f00 a1=aa2c a2=ffff837724b0 a3=aaaadc9e7010 items=12 ppid=1252 pid=1262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:56:28.143867 kernel: hv_utils: Registering HyperV Utility Driver Feb 9 09:56:28.150546 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 9 09:56:28.170116 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 9 09:56:28.170191 kernel: hv_vmbus: registering driver hv_utils Feb 9 09:56:28.170207 kernel: Console: switching to colour dummy device 80x25 Feb 9 09:56:28.176628 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 09:56:28.192770 kernel: hv_utils: Heartbeat IC version 3.0 Feb 9 09:56:28.192867 kernel: hv_utils: Shutdown IC version 3.2 Feb 9 09:56:28.192908 kernel: hv_utils: TimeSync IC version 4.0 Feb 9 09:56:28.086556 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1256) Feb 9 09:56:28.151267 systemd-journald[1217]: Time jumped backwards, rotating. Feb 9 09:56:28.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:28.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:28.111713 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 9 09:56:28.120294 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:56:28.131235 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:56:28.132236 systemd-networkd[1273]: lo: Link UP Feb 9 09:56:28.132240 systemd-networkd[1273]: lo: Gained carrier Feb 9 09:56:28.132653 systemd-networkd[1273]: Enumeration completed Feb 9 09:56:28.136018 systemd[1]: Started systemd-networkd.service. Feb 9 09:56:28.140771 systemd-networkd[1273]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:56:28.142059 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:56:28.194329 kernel: mlx5_core c787:00:02.0 enP51079s1: Link up Feb 9 09:56:28.206108 lvm[1331]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:56:28.222349 kernel: hv_netvsc 002248bb-d916-0022-48bb-d916002248bb eth0: Data path switched to VF: enP51079s1 Feb 9 09:56:28.223988 systemd-networkd[1273]: enP51079s1: Link UP Feb 9 09:56:28.224352 systemd-networkd[1273]: eth0: Link UP Feb 9 09:56:28.224361 systemd-networkd[1273]: eth0: Gained carrier Feb 9 09:56:28.228835 systemd-networkd[1273]: enP51079s1: Gained carrier Feb 9 09:56:28.231407 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:56:28.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:28.236750 systemd[1]: Reached target cryptsetup.target. Feb 9 09:56:28.241443 systemd-networkd[1273]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:56:28.242614 systemd[1]: Starting lvm2-activation.service... Feb 9 09:56:28.247230 lvm[1335]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:56:28.265410 systemd[1]: Finished lvm2-activation.service. Feb 9 09:56:28.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:28.270500 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:56:28.275445 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:56:28.275478 systemd[1]: Reached target local-fs.target. Feb 9 09:56:28.279848 systemd[1]: Reached target machines.target. Feb 9 09:56:28.285596 systemd[1]: Starting ldconfig.service... Feb 9 09:56:28.290199 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:56:28.290296 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:56:28.291599 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:56:28.297019 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:56:28.303747 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:56:28.308878 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:56:28.308940 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:56:28.310114 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:56:28.318256 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1338 (bootctl) Feb 9 09:56:28.319551 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:56:28.331989 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:56:28.338392 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:56:28.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:28.513140 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:56:28.560957 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:56:29.483906 systemd-fsck[1347]: fsck.fat 4.2 (2021-01-31) Feb 9 09:56:29.483906 systemd-fsck[1347]: /dev/sda1: 236 files, 113719/258078 clusters Feb 9 09:56:29.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:29.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:29.486659 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:56:29.496090 systemd[1]: Mounting boot.mount... Feb 9 09:56:29.561455 systemd-networkd[1273]: eth0: Gained IPv6LL Feb 9 09:56:29.564372 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:56:29.866166 systemd[1]: Mounted boot.mount. Feb 9 09:56:29.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:29.883178 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:56:29.973534 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:56:29.974164 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:56:29.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:30.051391 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:56:30.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:30.058736 systemd[1]: Starting audit-rules.service... Feb 9 09:56:30.067742 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:56:30.075020 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:56:30.085623 systemd[1]: Starting systemd-resolved.service... Feb 9 09:56:30.093374 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:56:30.100191 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:56:30.107135 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:56:30.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:30.114248 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:56:30.145000 audit[1371]: SYSTEM_BOOT pid=1371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:56:30.148513 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:56:30.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:30.168929 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:56:30.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:56:30.197000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:56:30.197000 audit[1382]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffce967140 a2=420 a3=0 items=0 ppid=1359 pid=1382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:56:30.197000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:56:30.199691 augenrules[1382]: No rules Feb 9 09:56:30.200160 systemd[1]: Finished audit-rules.service. Feb 9 09:56:30.206017 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:56:30.212624 systemd[1]: Reached target time-set.target. Feb 9 09:56:30.219461 systemd-resolved[1369]: Positive Trust Anchors: Feb 9 09:56:30.219486 systemd-resolved[1369]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:56:30.219525 systemd-resolved[1369]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:56:30.240821 systemd-resolved[1369]: Using system hostname 'ci-3510.3.2-a-d10cdd880c'. Feb 9 09:56:30.242598 systemd[1]: Started systemd-resolved.service. Feb 9 09:56:30.248510 systemd[1]: Reached target network.target. Feb 9 09:56:30.253741 systemd[1]: Reached target network-online.target. Feb 9 09:56:30.259235 systemd[1]: Reached target nss-lookup.target. Feb 9 09:56:30.373882 systemd-timesyncd[1370]: Contacted time server 23.157.160.168:123 (0.flatcar.pool.ntp.org). Feb 9 09:56:30.374435 systemd-timesyncd[1370]: Initial clock synchronization to Fri 2024-02-09 09:56:30.372898 UTC. Feb 9 09:56:31.663892 ldconfig[1337]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:56:31.675070 systemd[1]: Finished ldconfig.service. Feb 9 09:56:31.681448 systemd[1]: Starting systemd-update-done.service... Feb 9 09:56:31.699169 systemd[1]: Finished systemd-update-done.service. Feb 9 09:56:31.705458 systemd[1]: Reached target sysinit.target. Feb 9 09:56:31.710518 systemd[1]: Started motdgen.path. Feb 9 09:56:31.714649 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:56:31.724482 systemd[1]: Started logrotate.timer. Feb 9 09:56:31.729339 systemd[1]: Started mdadm.timer. Feb 9 09:56:31.733929 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:56:31.739933 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:56:31.739966 systemd[1]: Reached target paths.target. Feb 9 09:56:31.744999 systemd[1]: Reached target timers.target. Feb 9 09:56:31.751376 systemd[1]: Listening on dbus.socket. Feb 9 09:56:31.758272 systemd[1]: Starting docker.socket... Feb 9 09:56:31.764220 systemd[1]: Listening on sshd.socket. 
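Several unit names in the entries above carry \x2d escapes (dev-disk-by\x2dlabel-OEM.device, systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service, user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path) because systemd encodes filesystem paths into unit names. The short Python sketch below reproduces the names seen here under the usual rules ('/' becomes '-', bytes outside [A-Za-z0-9:_.] become \xXX); it is an illustration only, and systemd-escape --path remains the authoritative tool.

    import string

    _ALLOWED = string.ascii_letters + string.digits + ":_"

    def systemd_escape_path(path: str) -> str:
        """Rough sketch of systemd's path-to-unit-name escaping (see systemd-escape --path)."""
        trimmed = path.strip("/") or "/"
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")                  # path separators become dashes
            elif ch in _ALLOWED or (ch == "." and i > 0):
                out.append(ch)                   # plain characters pass through
            else:
                out.append("\\x%02x" % ord(ch))  # anything else is hex-escaped, e.g. '-' -> \x2d
        return "".join(out)

    print(systemd_escape_path("/dev/disk/by-label/OEM"))
    # -> dev-disk-by\x2dlabel-OEM, matching dev-disk-by\x2dlabel-OEM.device above
    print(systemd_escape_path("/var/lib/flatcar-install/user_data"))
    # -> var-lib-flatcar\x2dinstall-user_data, the instance name of the user-cloudinit path unit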
Feb 9 09:56:31.769594 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:56:31.770011 systemd[1]: Listening on docker.socket. Feb 9 09:56:31.775065 systemd[1]: Reached target sockets.target. Feb 9 09:56:31.780549 systemd[1]: Reached target basic.target. Feb 9 09:56:31.785690 systemd[1]: System is tainted: cgroupsv1 Feb 9 09:56:31.785737 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:56:31.785759 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:56:31.786937 systemd[1]: Starting containerd.service... Feb 9 09:56:31.792615 systemd[1]: Starting dbus.service... Feb 9 09:56:31.798234 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:56:31.804944 systemd[1]: Starting extend-filesystems.service... Feb 9 09:56:31.812220 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:56:31.813526 systemd[1]: Starting motdgen.service... Feb 9 09:56:31.819807 systemd[1]: Started nvidia.service. Feb 9 09:56:31.826113 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:56:31.829172 jq[1397]: false Feb 9 09:56:31.833049 systemd[1]: Starting prepare-critools.service... Feb 9 09:56:31.844490 systemd[1]: Starting prepare-helm.service... Feb 9 09:56:31.853532 extend-filesystems[1398]: Found sda Feb 9 09:56:31.866199 extend-filesystems[1398]: Found sda1 Feb 9 09:56:31.866199 extend-filesystems[1398]: Found sda2 Feb 9 09:56:31.866199 extend-filesystems[1398]: Found sda3 Feb 9 09:56:31.866199 extend-filesystems[1398]: Found usr Feb 9 09:56:31.866199 extend-filesystems[1398]: Found sda4 Feb 9 09:56:31.866199 extend-filesystems[1398]: Found sda6 Feb 9 09:56:31.866199 extend-filesystems[1398]: Found sda7 Feb 9 09:56:31.866199 extend-filesystems[1398]: Found sda9 Feb 9 09:56:31.866199 extend-filesystems[1398]: Checking size of /dev/sda9 Feb 9 09:56:31.854788 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:56:32.099107 extend-filesystems[1398]: Old size kept for /dev/sda9 Feb 9 09:56:32.099107 extend-filesystems[1398]: Found sr0 Feb 9 09:56:31.924563 dbus-daemon[1396]: [system] SELinux support is enabled Feb 9 09:56:31.873250 systemd[1]: Starting sshd-keygen.service... Feb 9 09:56:31.895249 systemd[1]: Starting systemd-logind.service... Feb 9 09:56:31.905415 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:56:31.905480 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:56:32.135841 jq[1434]: true Feb 9 09:56:31.907021 systemd[1]: Starting update-engine.service... Feb 9 09:56:31.915601 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:56:31.934713 systemd[1]: Started dbus.service. Feb 9 09:56:32.136476 tar[1445]: crictl Feb 9 09:56:31.958278 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:56:32.136801 tar[1446]: linux-arm64/helm Feb 9 09:56:31.958553 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Feb 9 09:56:32.137904 tar[1444]: ./ Feb 9 09:56:32.137904 tar[1444]: ./macvlan Feb 9 09:56:31.958846 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:56:31.959050 systemd[1]: Finished extend-filesystems.service. Feb 9 09:56:32.140964 jq[1447]: true Feb 9 09:56:31.983799 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:56:31.984024 systemd[1]: Finished motdgen.service. Feb 9 09:56:31.995398 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:56:31.995652 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 09:56:32.007738 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:56:32.007772 systemd[1]: Reached target system-config.target. Feb 9 09:56:32.017408 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:56:32.017429 systemd[1]: Reached target user-config.target. Feb 9 09:56:32.053645 systemd-logind[1429]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 09:56:32.056842 systemd-logind[1429]: New seat seat0. Feb 9 09:56:32.065842 systemd[1]: Started systemd-logind.service. Feb 9 09:56:32.116340 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 09:56:32.158543 env[1448]: time="2024-02-09T09:56:32.158477102Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:56:32.175143 bash[1478]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:56:32.176076 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:56:32.202002 update_engine[1431]: I0209 09:56:32.198726 1431 main.cc:92] Flatcar Update Engine starting Feb 9 09:56:32.215877 systemd[1]: Started update-engine.service. Feb 9 09:56:32.223968 update_engine[1431]: I0209 09:56:32.223642 1431 update_check_scheduler.cc:74] Next update check in 2m28s Feb 9 09:56:32.224723 env[1448]: time="2024-02-09T09:56:32.224672848Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:56:32.224869 env[1448]: time="2024-02-09T09:56:32.224844080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:56:32.232267 env[1448]: time="2024-02-09T09:56:32.226272048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:56:32.232267 env[1448]: time="2024-02-09T09:56:32.226332285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:56:32.232267 env[1448]: time="2024-02-09T09:56:32.226590392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:56:32.232267 env[1448]: time="2024-02-09T09:56:32.226610311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 09:56:32.232267 env[1448]: time="2024-02-09T09:56:32.226624590Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:56:32.232267 env[1448]: time="2024-02-09T09:56:32.226634990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:56:32.232267 env[1448]: time="2024-02-09T09:56:32.226716386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:56:32.232267 env[1448]: time="2024-02-09T09:56:32.229554443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:56:32.232267 env[1448]: time="2024-02-09T09:56:32.229740273Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:56:32.232267 env[1448]: time="2024-02-09T09:56:32.229757912Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 09:56:32.232550 tar[1444]: ./static Feb 9 09:56:32.226828 systemd[1]: Started locksmithd.service. Feb 9 09:56:32.232630 env[1448]: time="2024-02-09T09:56:32.229813990Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:56:32.232630 env[1448]: time="2024-02-09T09:56:32.229826629Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:56:32.249280 env[1448]: time="2024-02-09T09:56:32.249222972Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:56:32.249280 env[1448]: time="2024-02-09T09:56:32.249278729Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:56:32.249443 env[1448]: time="2024-02-09T09:56:32.249292848Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:56:32.249443 env[1448]: time="2024-02-09T09:56:32.249347566Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:56:32.249443 env[1448]: time="2024-02-09T09:56:32.249367285Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:56:32.249443 env[1448]: time="2024-02-09T09:56:32.249385044Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:56:32.249525 env[1448]: time="2024-02-09T09:56:32.249450361Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:56:32.249854 env[1448]: time="2024-02-09T09:56:32.249828581Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:56:32.249900 env[1448]: time="2024-02-09T09:56:32.249855100Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:56:32.249900 env[1448]: time="2024-02-09T09:56:32.249870179Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Feb 9 09:56:32.249900 env[1448]: time="2024-02-09T09:56:32.249883899Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:56:32.249900 env[1448]: time="2024-02-09T09:56:32.249898418Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:56:32.250068 env[1448]: time="2024-02-09T09:56:32.250044731Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:56:32.250157 env[1448]: time="2024-02-09T09:56:32.250132566Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:56:32.250538 env[1448]: time="2024-02-09T09:56:32.250513307Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:56:32.250597 env[1448]: time="2024-02-09T09:56:32.250547665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:56:32.250597 env[1448]: time="2024-02-09T09:56:32.250563744Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:56:32.250646 env[1448]: time="2024-02-09T09:56:32.250607422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:56:32.250646 env[1448]: time="2024-02-09T09:56:32.250620182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:56:32.250646 env[1448]: time="2024-02-09T09:56:32.250631581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:56:32.250646 env[1448]: time="2024-02-09T09:56:32.250642461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:56:32.250725 env[1448]: time="2024-02-09T09:56:32.250655900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:56:32.250725 env[1448]: time="2024-02-09T09:56:32.250668099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:56:32.250725 env[1448]: time="2024-02-09T09:56:32.250682898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:56:32.250725 env[1448]: time="2024-02-09T09:56:32.250694338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:56:32.250725 env[1448]: time="2024-02-09T09:56:32.250706777Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:56:32.250894 env[1448]: time="2024-02-09T09:56:32.250828531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:56:32.250894 env[1448]: time="2024-02-09T09:56:32.250854090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:56:32.250894 env[1448]: time="2024-02-09T09:56:32.250866969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:56:32.250894 env[1448]: time="2024-02-09T09:56:32.250878849Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 9 09:56:32.250998 env[1448]: time="2024-02-09T09:56:32.250894368Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:56:32.250998 env[1448]: time="2024-02-09T09:56:32.250905007Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:56:32.250998 env[1448]: time="2024-02-09T09:56:32.250922926Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:56:32.250998 env[1448]: time="2024-02-09T09:56:32.250956045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 09:56:32.251209 env[1448]: time="2024-02-09T09:56:32.251154835Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:56:32.257799 env[1448]: time="2024-02-09T09:56:32.251214312Z" level=info msg="Connect containerd service" Feb 9 09:56:32.257799 env[1448]: time="2024-02-09T09:56:32.251248190Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:56:32.257799 env[1448]: time="2024-02-09T09:56:32.251751965Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:56:32.257799 env[1448]: 
time="2024-02-09T09:56:32.251981473Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 09:56:32.257799 env[1448]: time="2024-02-09T09:56:32.252017471Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 09:56:32.257799 env[1448]: time="2024-02-09T09:56:32.252066589Z" level=info msg="containerd successfully booted in 0.111964s" Feb 9 09:56:32.257799 env[1448]: time="2024-02-09T09:56:32.257217649Z" level=info msg="Start subscribing containerd event" Feb 9 09:56:32.257799 env[1448]: time="2024-02-09T09:56:32.257360522Z" level=info msg="Start recovering state" Feb 9 09:56:32.252169 systemd[1]: Started containerd.service. Feb 9 09:56:32.261275 env[1448]: time="2024-02-09T09:56:32.261220448Z" level=info msg="Start event monitor" Feb 9 09:56:32.261275 env[1448]: time="2024-02-09T09:56:32.261260686Z" level=info msg="Start snapshots syncer" Feb 9 09:56:32.261275 env[1448]: time="2024-02-09T09:56:32.261272245Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:56:32.261392 env[1448]: time="2024-02-09T09:56:32.261282285Z" level=info msg="Start streaming server" Feb 9 09:56:32.321001 tar[1444]: ./vlan Feb 9 09:56:32.391722 tar[1444]: ./portmap Feb 9 09:56:32.462158 tar[1444]: ./host-local Feb 9 09:56:32.527416 tar[1444]: ./vrf Feb 9 09:56:32.593157 tar[1444]: ./bridge Feb 9 09:56:32.673031 tar[1444]: ./tuning Feb 9 09:56:32.740351 tar[1444]: ./firewall Feb 9 09:56:32.819601 tar[1444]: ./host-device Feb 9 09:56:32.881536 tar[1446]: linux-arm64/LICENSE Feb 9 09:56:32.881536 tar[1446]: linux-arm64/README.md Feb 9 09:56:32.893410 tar[1444]: ./sbr Feb 9 09:56:32.904720 systemd[1]: Finished prepare-helm.service. Feb 9 09:56:32.935457 systemd[1]: Finished prepare-critools.service. Feb 9 09:56:32.956071 tar[1444]: ./loopback Feb 9 09:56:32.983188 tar[1444]: ./dhcp Feb 9 09:56:33.058328 tar[1444]: ./ptp Feb 9 09:56:33.090799 tar[1444]: ./ipvlan Feb 9 09:56:33.122703 tar[1444]: ./bandwidth Feb 9 09:56:33.155473 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:56:33.173592 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:56:34.131409 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:56:34.147600 systemd[1]: Finished sshd-keygen.service. Feb 9 09:56:34.154219 systemd[1]: Starting issuegen.service... Feb 9 09:56:34.159385 systemd[1]: Started waagent.service. Feb 9 09:56:34.165015 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:56:34.165255 systemd[1]: Finished issuegen.service. Feb 9 09:56:34.171440 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:56:34.187148 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:56:34.194665 systemd[1]: Started getty@tty1.service. Feb 9 09:56:34.202787 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 09:56:34.208283 systemd[1]: Reached target getty.target. Feb 9 09:56:34.213071 systemd[1]: Reached target multi-user.target. Feb 9 09:56:34.219570 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:56:34.232350 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:56:34.232589 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:56:34.239010 systemd[1]: Startup finished in 11.740s (kernel) + 10.760s (userspace) = 22.500s. 
Feb 9 09:56:34.400752 login[1536]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:56:34.406160 login[1537]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:56:34.423224 systemd[1]: Created slice user-500.slice. Feb 9 09:56:34.424292 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:56:34.426990 systemd-logind[1429]: New session 1 of user core. Feb 9 09:56:34.430174 systemd-logind[1429]: New session 2 of user core. Feb 9 09:56:34.440727 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:56:34.442121 systemd[1]: Starting user@500.service... Feb 9 09:56:34.454805 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:56:34.693671 systemd[1543]: Queued start job for default target default.target. Feb 9 09:56:34.693918 systemd[1543]: Reached target paths.target. Feb 9 09:56:34.693933 systemd[1543]: Reached target sockets.target. Feb 9 09:56:34.693944 systemd[1543]: Reached target timers.target. Feb 9 09:56:34.693954 systemd[1543]: Reached target basic.target. Feb 9 09:56:34.694000 systemd[1543]: Reached target default.target. Feb 9 09:56:34.694021 systemd[1543]: Startup finished in 233ms. Feb 9 09:56:34.694078 systemd[1]: Started user@500.service. Feb 9 09:56:34.695026 systemd[1]: Started session-1.scope. Feb 9 09:56:34.695577 systemd[1]: Started session-2.scope. Feb 9 09:56:36.037853 waagent[1532]: 2024-02-09T09:56:36.037742Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 09:56:36.045665 waagent[1532]: 2024-02-09T09:56:36.045574Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 09:56:36.050890 waagent[1532]: 2024-02-09T09:56:36.050816Z INFO Daemon Daemon Python: 3.9.16 Feb 9 09:56:36.057464 waagent[1532]: 2024-02-09T09:56:36.057366Z INFO Daemon Daemon Run daemon Feb 9 09:56:36.063582 waagent[1532]: 2024-02-09T09:56:36.063505Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 09:56:36.084272 waagent[1532]: 2024-02-09T09:56:36.084110Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 9 09:56:36.102263 waagent[1532]: 2024-02-09T09:56:36.102112Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 09:56:36.114223 waagent[1532]: 2024-02-09T09:56:36.114140Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 09:56:36.121959 waagent[1532]: 2024-02-09T09:56:36.121875Z INFO Daemon Daemon Using waagent for provisioning Feb 9 09:56:36.129157 waagent[1532]: 2024-02-09T09:56:36.129080Z INFO Daemon Daemon Activate resource disk Feb 9 09:56:36.136541 waagent[1532]: 2024-02-09T09:56:36.136454Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 09:56:36.161536 waagent[1532]: 2024-02-09T09:56:36.161452Z INFO Daemon Daemon Found device: None Feb 9 09:56:36.167533 waagent[1532]: 2024-02-09T09:56:36.167448Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 09:56:36.177321 waagent[1532]: 2024-02-09T09:56:36.177233Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 09:56:36.192237 waagent[1532]: 2024-02-09T09:56:36.192164Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 09:56:36.199141 waagent[1532]: 2024-02-09T09:56:36.199066Z INFO Daemon Daemon Running default provisioning handler Feb 9 09:56:36.212672 waagent[1532]: 2024-02-09T09:56:36.212519Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 09:56:36.231456 waagent[1532]: 2024-02-09T09:56:36.231285Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 09:56:36.242454 waagent[1532]: 2024-02-09T09:56:36.242372Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 09:56:36.248476 waagent[1532]: 2024-02-09T09:56:36.248406Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 09:56:36.291941 waagent[1532]: 2024-02-09T09:56:36.291745Z INFO Daemon Daemon Successfully mounted dvd Feb 9 09:56:36.319505 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 09:56:36.336867 waagent[1532]: 2024-02-09T09:56:36.336718Z INFO Daemon Daemon Detect protocol endpoint Feb 9 09:56:36.343261 waagent[1532]: 2024-02-09T09:56:36.343172Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 09:56:36.350097 waagent[1532]: 2024-02-09T09:56:36.350013Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 9 09:56:36.358767 waagent[1532]: 2024-02-09T09:56:36.358680Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 09:56:36.365181 waagent[1532]: 2024-02-09T09:56:36.365102Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 09:56:36.371337 waagent[1532]: 2024-02-09T09:56:36.371248Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 09:56:36.412341 waagent[1532]: 2024-02-09T09:56:36.412258Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 09:56:36.423391 waagent[1532]: 2024-02-09T09:56:36.423335Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 09:56:36.430664 waagent[1532]: 2024-02-09T09:56:36.430584Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 09:56:37.297565 waagent[1532]: 2024-02-09T09:56:37.297420Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 09:56:37.314993 waagent[1532]: 2024-02-09T09:56:37.314917Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 9 09:56:37.322289 waagent[1532]: 2024-02-09T09:56:37.322213Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 09:56:37.406782 waagent[1532]: 2024-02-09T09:56:37.406640Z INFO Daemon Daemon Found private key matching thumbprint 4CEC6D1E1804177751CDFCBAB23B501FE8D32F34 Feb 9 09:56:37.419850 waagent[1532]: 2024-02-09T09:56:37.419757Z INFO Daemon Daemon Certificate with thumbprint EC09F9756EAC6C27A60BFD87E863BBBCA0CFB4B8 has no matching private key. Feb 9 09:56:37.430766 waagent[1532]: 2024-02-09T09:56:37.430673Z INFO Daemon Daemon Fetch goal state completed Feb 9 09:56:37.464275 waagent[1532]: 2024-02-09T09:56:37.464218Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 233834b6-1028-49f7-a41e-e7b4d4a109cc New eTag: 987114086535572408] Feb 9 09:56:37.475433 waagent[1532]: 2024-02-09T09:56:37.475350Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 09:56:37.492237 waagent[1532]: 2024-02-09T09:56:37.492154Z INFO Daemon Daemon Starting provisioning Feb 9 09:56:37.498202 waagent[1532]: 2024-02-09T09:56:37.498127Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 09:56:37.503774 waagent[1532]: 2024-02-09T09:56:37.503704Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-d10cdd880c] Feb 9 09:56:37.527085 waagent[1532]: 2024-02-09T09:56:37.526957Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-d10cdd880c] Feb 9 09:56:37.533936 waagent[1532]: 2024-02-09T09:56:37.533847Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 09:56:37.540711 waagent[1532]: 2024-02-09T09:56:37.540637Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 09:56:37.556942 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 09:56:37.557166 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 09:56:37.557228 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 09:56:37.557452 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:56:37.562357 systemd-networkd[1273]: eth0: DHCPv6 lease lost Feb 9 09:56:37.563659 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:56:37.563905 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:56:37.565884 systemd[1]: Starting systemd-networkd.service... 
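The protocol-detection entries above test the route to 168.63.129.16 and then report "Fabric preferred wire protocol version:2015-04-05". As a rough illustration of that step: the wireserver's version list is typically fetched over plain HTTP from inside the VM. The ?comp=versions path below is an assumption based on common WALinuxAgent behaviour and does not appear in this log.

    import urllib.request

    # Hypothetical probe of the Azure wireserver from inside the VM; the URL path
    # is assumed from typical WALinuxAgent behaviour, not taken from this log.
    WIRESERVER = "168.63.129.16"
    with urllib.request.urlopen(f"http://{WIRESERVER}/?comp=versions", timeout=5) as resp:
        print(resp.read().decode())  # XML listing the supported wire protocol versions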
Feb 9 09:56:37.597611 systemd-networkd[1590]: enP51079s1: Link UP Feb 9 09:56:37.597622 systemd-networkd[1590]: enP51079s1: Gained carrier Feb 9 09:56:37.598524 systemd-networkd[1590]: eth0: Link UP Feb 9 09:56:37.598532 systemd-networkd[1590]: eth0: Gained carrier Feb 9 09:56:37.598838 systemd-networkd[1590]: lo: Link UP Feb 9 09:56:37.598846 systemd-networkd[1590]: lo: Gained carrier Feb 9 09:56:37.599065 systemd-networkd[1590]: eth0: Gained IPv6LL Feb 9 09:56:37.600075 systemd-networkd[1590]: Enumeration completed Feb 9 09:56:37.600191 systemd[1]: Started systemd-networkd.service. Feb 9 09:56:37.601624 waagent[1532]: 2024-02-09T09:56:37.601480Z INFO Daemon Daemon Create user account if not exists Feb 9 09:56:37.602018 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:56:37.608964 waagent[1532]: 2024-02-09T09:56:37.608793Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 09:56:37.615659 waagent[1532]: 2024-02-09T09:56:37.615580Z INFO Daemon Daemon Configure sudoer Feb 9 09:56:37.616668 systemd-networkd[1590]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:56:37.621874 waagent[1532]: 2024-02-09T09:56:37.621789Z INFO Daemon Daemon Configure sshd Feb 9 09:56:37.627191 waagent[1532]: 2024-02-09T09:56:37.627121Z INFO Daemon Daemon Deploy ssh public key. Feb 9 09:56:37.650435 systemd-networkd[1590]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:56:37.653995 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:56:38.845786 waagent[1532]: 2024-02-09T09:56:38.845712Z INFO Daemon Daemon Provisioning complete Feb 9 09:56:38.870509 waagent[1532]: 2024-02-09T09:56:38.870442Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 09:56:38.877880 waagent[1532]: 2024-02-09T09:56:38.877800Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 09:56:38.888575 waagent[1532]: 2024-02-09T09:56:38.888497Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 09:56:39.190735 waagent[1600]: 2024-02-09T09:56:39.190592Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 09:56:39.191833 waagent[1600]: 2024-02-09T09:56:39.191779Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:56:39.192072 waagent[1600]: 2024-02-09T09:56:39.192025Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:56:39.205084 waagent[1600]: 2024-02-09T09:56:39.205011Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 09:56:39.205422 waagent[1600]: 2024-02-09T09:56:39.205371Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 09:56:39.277293 waagent[1600]: 2024-02-09T09:56:39.277160Z INFO ExtHandler ExtHandler Found private key matching thumbprint 4CEC6D1E1804177751CDFCBAB23B501FE8D32F34 Feb 9 09:56:39.277692 waagent[1600]: 2024-02-09T09:56:39.277637Z INFO ExtHandler ExtHandler Certificate with thumbprint EC09F9756EAC6C27A60BFD87E863BBBCA0CFB4B8 has no matching private key. 
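Goal-state processing above matches certificates by thumbprint (4CEC6D1E… has a matching private key, EC09F975… does not). This sketch assumes the thumbprint is the uppercase SHA-1 of the certificate's DER encoding, which is the usual Azure convention; the certificate path in the usage comment is illustrative, not taken from this log.

    import hashlib, ssl

    def thumbprint(pem_cert: str) -> str:
        """Uppercase SHA-1 over the DER form of a PEM certificate (assumed thumbprint format)."""
        der = ssl.PEM_cert_to_DER_cert(pem_cert)
        return hashlib.sha1(der).hexdigest().upper()

    # Illustrative usage with a hypothetical path:
    # print(thumbprint(open("/var/lib/waagent/Certificates.pem").read()))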
Feb 9 09:56:39.278013 waagent[1600]: 2024-02-09T09:56:39.277965Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 09:56:39.295967 waagent[1600]: 2024-02-09T09:56:39.295910Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 8a4c6880-ba4d-4e83-8672-8d7319f07739 New eTag: 987114086535572408] Feb 9 09:56:39.296759 waagent[1600]: 2024-02-09T09:56:39.296703Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 09:56:39.338145 waagent[1600]: 2024-02-09T09:56:39.338016Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 09:56:39.352602 waagent[1600]: 2024-02-09T09:56:39.352516Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1600 Feb 9 09:56:39.356527 waagent[1600]: 2024-02-09T09:56:39.356463Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 09:56:39.357966 waagent[1600]: 2024-02-09T09:56:39.357911Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 09:56:39.389427 waagent[1600]: 2024-02-09T09:56:39.389365Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 09:56:39.389981 waagent[1600]: 2024-02-09T09:56:39.389926Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 09:56:39.397705 waagent[1600]: 2024-02-09T09:56:39.397650Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 09:56:39.398349 waagent[1600]: 2024-02-09T09:56:39.398277Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 09:56:39.399579 waagent[1600]: 2024-02-09T09:56:39.399518Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 09:56:39.401033 waagent[1600]: 2024-02-09T09:56:39.400963Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 09:56:39.401314 waagent[1600]: 2024-02-09T09:56:39.401234Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:56:39.401876 waagent[1600]: 2024-02-09T09:56:39.401802Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:56:39.402489 waagent[1600]: 2024-02-09T09:56:39.402422Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 09:56:39.403128 waagent[1600]: 2024-02-09T09:56:39.403060Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Feb 9 09:56:39.403673 waagent[1600]: 2024-02-09T09:56:39.403602Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 09:56:39.403673 waagent[1600]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 09:56:39.403673 waagent[1600]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 09:56:39.403673 waagent[1600]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 09:56:39.403673 waagent[1600]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:56:39.403673 waagent[1600]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:56:39.403673 waagent[1600]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:56:39.404060 waagent[1600]: 2024-02-09T09:56:39.403997Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:56:39.404726 waagent[1600]: 2024-02-09T09:56:39.404625Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 09:56:39.404884 waagent[1600]: 2024-02-09T09:56:39.404832Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:56:39.405015 waagent[1600]: 2024-02-09T09:56:39.404963Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 09:56:39.407968 waagent[1600]: 2024-02-09T09:56:39.407836Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 09:56:39.408548 waagent[1600]: 2024-02-09T09:56:39.408471Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 09:56:39.409260 waagent[1600]: 2024-02-09T09:56:39.409193Z INFO EnvHandler ExtHandler Configure routes Feb 9 09:56:39.409776 waagent[1600]: 2024-02-09T09:56:39.409699Z INFO EnvHandler ExtHandler Gateway:None Feb 9 09:56:39.411001 waagent[1600]: 2024-02-09T09:56:39.410940Z INFO EnvHandler ExtHandler Routes:None Feb 9 09:56:39.412181 waagent[1600]: 2024-02-09T09:56:39.412113Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 09:56:39.425777 waagent[1600]: 2024-02-09T09:56:39.425577Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 09:56:39.426866 waagent[1600]: 2024-02-09T09:56:39.426819Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 09:56:39.427960 waagent[1600]: 2024-02-09T09:56:39.427904Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Feb 9 09:56:39.439417 waagent[1600]: 2024-02-09T09:56:39.439348Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1590' Feb 9 09:56:39.452675 waagent[1600]: 2024-02-09T09:56:39.452487Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 09:56:39.452675 waagent[1600]: Executing ['ip', '-a', '-o', 'link']: Feb 9 09:56:39.452675 waagent[1600]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 09:56:39.452675 waagent[1600]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:d9:16 brd ff:ff:ff:ff:ff:ff Feb 9 09:56:39.452675 waagent[1600]: 3: enP51079s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:d9:16 brd ff:ff:ff:ff:ff:ff\ altname enP51079p0s2 Feb 9 09:56:39.452675 waagent[1600]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 09:56:39.452675 waagent[1600]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 09:56:39.452675 waagent[1600]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 09:56:39.452675 waagent[1600]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 09:56:39.452675 waagent[1600]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 09:56:39.452675 waagent[1600]: 2: eth0 inet6 fe80::222:48ff:febb:d916/64 scope link \ valid_lft forever preferred_lft forever Feb 9 09:56:39.472737 waagent[1600]: 2024-02-09T09:56:39.472637Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
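The routing table the MonitorHandler prints is read straight from /proc/net/route, where destination and gateway fields are 32-bit values in host byte order rendered as hex. The small sketch below (assuming a little-endian host, as on this aarch64 VM) decodes the values shown above back into the dotted-quad addresses that appear elsewhere in the log.

    import socket
    import struct

    def decode(hexaddr: str) -> str:
        """Turn a /proc/net/route hex field into dotted-quad (little-endian host assumed)."""
        return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

    print(decode("0114C80A"))  # 10.200.20.1     -> the DHCP gateway reported earlier
    print(decode("0014C80A"))  # 10.200.20.0     -> the local /24 behind eth0 (10.200.20.12/24)
    print(decode("10813FA8"))  # 168.63.129.16   -> the wireserver endpoint
    print(decode("FEA9FEA9"))  # 169.254.169.254 -> the link-local metadata address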
Feb 9 09:56:39.586461 waagent[1600]: 2024-02-09T09:56:39.586259Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Feb 9 09:56:39.589866 waagent[1600]: 2024-02-09T09:56:39.589738Z INFO EnvHandler ExtHandler Firewall rules: Feb 9 09:56:39.589866 waagent[1600]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:56:39.589866 waagent[1600]: pkts bytes target prot opt in out source destination Feb 9 09:56:39.589866 waagent[1600]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:56:39.589866 waagent[1600]: pkts bytes target prot opt in out source destination Feb 9 09:56:39.589866 waagent[1600]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:56:39.589866 waagent[1600]: pkts bytes target prot opt in out source destination Feb 9 09:56:39.589866 waagent[1600]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 09:56:39.589866 waagent[1600]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 09:56:39.591367 waagent[1600]: 2024-02-09T09:56:39.591289Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 09:56:39.625396 waagent[1600]: 2024-02-09T09:56:39.625282Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 09:56:39.891984 waagent[1532]: 2024-02-09T09:56:39.891819Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 09:56:39.895654 waagent[1532]: 2024-02-09T09:56:39.895595Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 09:56:41.041903 waagent[1638]: 2024-02-09T09:56:41.041802Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 09:56:41.042956 waagent[1638]: 2024-02-09T09:56:41.042900Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 09:56:41.043186 waagent[1638]: 2024-02-09T09:56:41.043138Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 09:56:41.051251 waagent[1638]: 2024-02-09T09:56:41.051143Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 09:56:41.051803 waagent[1638]: 2024-02-09T09:56:41.051749Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:56:41.052036 waagent[1638]: 2024-02-09T09:56:41.051988Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:56:41.065420 waagent[1638]: 2024-02-09T09:56:41.065337Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 09:56:41.075353 waagent[1638]: 2024-02-09T09:56:41.075275Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 09:56:41.076549 waagent[1638]: 2024-02-09T09:56:41.076493Z INFO ExtHandler Feb 9 09:56:41.076794 waagent[1638]: 2024-02-09T09:56:41.076745Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 28497c12-961b-403f-93d4-7560f351109c eTag: 987114086535572408 source: Fabric] Feb 9 09:56:41.077636 waagent[1638]: 2024-02-09T09:56:41.077580Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
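The EnvHandler dump above shows the two OUTPUT-chain rules the agent maintains for the wireserver: root (owner UID 0) may open TCP connections to 168.63.129.16, while new or invalid connections from anyone else are dropped. A rough reconstruction of equivalent iptables invocations is sketched below, mirroring the argv-list style the agent itself logs; only the resulting rules appear in the log, so the exact flag spelling is an assumption.

    import subprocess

    WIRESERVER = "168.63.129.16"

    # Approximate commands that would yield the OUTPUT-chain rules listed above.
    rules = [
        ["iptables", "-w", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["iptables", "-w", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for argv in rules:
        subprocess.run(argv, check=True)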
Feb 9 09:56:41.078963 waagent[1638]: 2024-02-09T09:56:41.078905Z INFO ExtHandler Feb 9 09:56:41.079177 waagent[1638]: 2024-02-09T09:56:41.079130Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 09:56:41.085740 waagent[1638]: 2024-02-09T09:56:41.085696Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 09:56:41.086287 waagent[1638]: 2024-02-09T09:56:41.086243Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 09:56:41.120717 waagent[1638]: 2024-02-09T09:56:41.120652Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 09:56:41.195994 waagent[1638]: 2024-02-09T09:56:41.195846Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EC09F9756EAC6C27A60BFD87E863BBBCA0CFB4B8', 'hasPrivateKey': False} Feb 9 09:56:41.197315 waagent[1638]: 2024-02-09T09:56:41.197238Z INFO ExtHandler Downloaded certificate {'thumbprint': '4CEC6D1E1804177751CDFCBAB23B501FE8D32F34', 'hasPrivateKey': True} Feb 9 09:56:41.198592 waagent[1638]: 2024-02-09T09:56:41.198527Z INFO ExtHandler Fetch goal state completed Feb 9 09:56:41.228131 waagent[1638]: 2024-02-09T09:56:41.228052Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1638 Feb 9 09:56:41.231857 waagent[1638]: 2024-02-09T09:56:41.231781Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 09:56:41.233492 waagent[1638]: 2024-02-09T09:56:41.233427Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 09:56:41.238733 waagent[1638]: 2024-02-09T09:56:41.238678Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 09:56:41.239266 waagent[1638]: 2024-02-09T09:56:41.239211Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 09:56:41.247276 waagent[1638]: 2024-02-09T09:56:41.247218Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 09:56:41.248000 waagent[1638]: 2024-02-09T09:56:41.247940Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 09:56:41.268815 waagent[1638]: 2024-02-09T09:56:41.268677Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Feb 9 09:56:41.272079 waagent[1638]: 2024-02-09T09:56:41.271947Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Feb 9 09:56:41.276071 waagent[1638]: 2024-02-09T09:56:41.276008Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 09:56:41.277955 waagent[1638]: 2024-02-09T09:56:41.277884Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 09:56:41.278220 waagent[1638]: 2024-02-09T09:56:41.278149Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:56:41.278791 waagent[1638]: 2024-02-09T09:56:41.278720Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:56:41.279447 waagent[1638]: 2024-02-09T09:56:41.279367Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 9 09:56:41.280031 waagent[1638]: 2024-02-09T09:56:41.279963Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 09:56:41.280746 waagent[1638]: 2024-02-09T09:56:41.280658Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 09:56:41.280746 waagent[1638]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 09:56:41.280746 waagent[1638]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 09:56:41.280746 waagent[1638]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 09:56:41.280746 waagent[1638]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:56:41.280746 waagent[1638]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:56:41.280746 waagent[1638]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:56:41.281049 waagent[1638]: 2024-02-09T09:56:41.280979Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:56:41.281204 waagent[1638]: 2024-02-09T09:56:41.281113Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 09:56:41.281346 waagent[1638]: 2024-02-09T09:56:41.281264Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 09:56:41.283952 waagent[1638]: 2024-02-09T09:56:41.283775Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:56:41.285224 waagent[1638]: 2024-02-09T09:56:41.285151Z INFO EnvHandler ExtHandler Configure routes Feb 9 09:56:41.286242 waagent[1638]: 2024-02-09T09:56:41.286180Z INFO EnvHandler ExtHandler Gateway:None Feb 9 09:56:41.286808 waagent[1638]: 2024-02-09T09:56:41.286755Z INFO EnvHandler ExtHandler Routes:None Feb 9 09:56:41.287002 waagent[1638]: 2024-02-09T09:56:41.286942Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
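The MonitorHandler dump of /proc/net/route above prints destinations, gateways and masks as little-endian hex words on this machine. A short sketch decoding them, using the values from the log: 0114C80A is 10.200.20.1 (the default gateway) and 0014C80A is the 10.200.20.0/24 on-link route.

    # Sketch: decode the little-endian hex address fields from the /proc/net/route dump above.
    import socket
    import struct

    def decode(hex_word: str) -> str:
        return socket.inet_ntoa(struct.pack("<I", int(hex_word, 16)))

    for dest, gw, mask in [("00000000", "0114C80A", "00000000"),   # default via 10.200.20.1
                           ("0014C80A", "00000000", "00FFFFFF")]:  # 10.200.20.0/24 on-link
        print(decode(dest), "via", decode(gw), "mask", decode(mask))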
Feb 9 09:56:41.287062 waagent[1638]: 2024-02-09T09:56:41.286658Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 09:56:41.290405 waagent[1638]: 2024-02-09T09:56:41.290279Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 09:56:41.314936 waagent[1638]: 2024-02-09T09:56:41.314809Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 09:56:41.314936 waagent[1638]: Executing ['ip', '-a', '-o', 'link']: Feb 9 09:56:41.314936 waagent[1638]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 09:56:41.314936 waagent[1638]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:d9:16 brd ff:ff:ff:ff:ff:ff Feb 9 09:56:41.314936 waagent[1638]: 3: enP51079s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:d9:16 brd ff:ff:ff:ff:ff:ff\ altname enP51079p0s2 Feb 9 09:56:41.314936 waagent[1638]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 09:56:41.314936 waagent[1638]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 09:56:41.314936 waagent[1638]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 09:56:41.314936 waagent[1638]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 09:56:41.314936 waagent[1638]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 09:56:41.314936 waagent[1638]: 2: eth0 inet6 fe80::222:48ff:febb:d916/64 scope link \ valid_lft forever preferred_lft forever Feb 9 09:56:41.325454 waagent[1638]: 2024-02-09T09:56:41.325290Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 09:56:41.326803 waagent[1638]: 2024-02-09T09:56:41.326717Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 09:56:41.360132 waagent[1638]: 2024-02-09T09:56:41.359898Z INFO ExtHandler ExtHandler Feb 9 09:56:41.360879 waagent[1638]: 2024-02-09T09:56:41.360798Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 57424a95-c573-4fa5-8abf-f488797f7c65 correlation 5b946e7a-0b89-4579-a82d-25f004f832f5 created: 2024-02-09T09:55:51.870888Z] Feb 9 09:56:41.361933 waagent[1638]: 2024-02-09T09:56:41.361857Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Feb 9 09:56:41.372727 waagent[1638]: 2024-02-09T09:56:41.372637Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 12 ms] Feb 9 09:56:41.387317 waagent[1638]: 2024-02-09T09:56:41.387223Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 09:56:41.387317 waagent[1638]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:56:41.387317 waagent[1638]: pkts bytes target prot opt in out source destination Feb 9 09:56:41.387317 waagent[1638]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:56:41.387317 waagent[1638]: pkts bytes target prot opt in out source destination Feb 9 09:56:41.387317 waagent[1638]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:56:41.387317 waagent[1638]: pkts bytes target prot opt in out source destination Feb 9 09:56:41.387317 waagent[1638]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 09:56:41.387317 waagent[1638]: 117 14460 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 09:56:41.387317 waagent[1638]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 09:56:41.401839 waagent[1638]: 2024-02-09T09:56:41.401764Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 09:56:41.413364 waagent[1638]: 2024-02-09T09:56:41.413261Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4ED7066A-C27C-45FE-8CFE-6238EFD648FC;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 09:57:16.064627 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 9 09:57:17.777699 update_engine[1431]: I0209 09:57:17.777346 1431 update_attempter.cc:509] Updating boot flags... Feb 9 09:57:28.602835 systemd[1]: Created slice system-sshd.slice. Feb 9 09:57:28.604043 systemd[1]: Started sshd@0-10.200.20.12:22-10.200.12.6:43888.service. Feb 9 09:57:29.077963 sshd[1748]: Accepted publickey for core from 10.200.12.6 port 43888 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:57:29.083372 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:29.087762 systemd[1]: Started session-3.scope. Feb 9 09:57:29.088370 systemd-logind[1429]: New session 3 of user core. Feb 9 09:57:29.453008 systemd[1]: Started sshd@1-10.200.20.12:22-10.200.12.6:43894.service. Feb 9 09:57:29.876378 sshd[1753]: Accepted publickey for core from 10.200.12.6 port 43894 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:57:29.877966 sshd[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:29.881348 systemd-logind[1429]: New session 4 of user core. Feb 9 09:57:29.881979 systemd[1]: Started session-4.scope. Feb 9 09:57:30.185681 sshd[1753]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:30.188615 systemd[1]: sshd@1-10.200.20.12:22-10.200.12.6:43894.service: Deactivated successfully. Feb 9 09:57:30.188772 systemd-logind[1429]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:57:30.189340 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:57:30.189961 systemd-logind[1429]: Removed session 4. Feb 9 09:57:30.254353 systemd[1]: Started sshd@2-10.200.20.12:22-10.200.12.6:43906.service. 
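The "Current Firewall rules" listing at 09:56:41.387 shows three OUTPUT rules guarding traffic to the Azure wireserver 168.63.129.16: allow DNS over TCP (dpt:53), allow traffic owned by UID 0, and drop other INVALID/NEW connections. A rough reconstruction of those rules as iptables invocations, driven from Python; this mirrors the counters listing only and is not the agent's actual code path.

    # Approximate equivalents of the three wireserver OUTPUT rules listed above (filter table).
    # Requires root; flag spelling is reconstructed from the rule listing, not taken from waagent.
    import subprocess

    WIRESERVER = "168.63.129.16"
    RULES = [
        ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

    for rule in RULES:
        subprocess.run(["iptables", "-w", "-A", "OUTPUT", *rule], check=True)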
Feb 9 09:57:30.672265 sshd[1760]: Accepted publickey for core from 10.200.12.6 port 43906 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:57:30.673859 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:30.677351 systemd-logind[1429]: New session 5 of user core. Feb 9 09:57:30.677897 systemd[1]: Started session-5.scope. Feb 9 09:57:30.974421 sshd[1760]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:30.976609 systemd[1]: sshd@2-10.200.20.12:22-10.200.12.6:43906.service: Deactivated successfully. Feb 9 09:57:30.977348 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:57:30.978223 systemd-logind[1429]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:57:30.978987 systemd-logind[1429]: Removed session 5. Feb 9 09:57:31.044011 systemd[1]: Started sshd@3-10.200.20.12:22-10.200.12.6:43910.service. Feb 9 09:57:31.466910 sshd[1767]: Accepted publickey for core from 10.200.12.6 port 43910 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:57:31.468516 sshd[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:31.472572 systemd[1]: Started session-6.scope. Feb 9 09:57:31.473018 systemd-logind[1429]: New session 6 of user core. Feb 9 09:57:31.776010 sshd[1767]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:31.778518 systemd[1]: sshd@3-10.200.20.12:22-10.200.12.6:43910.service: Deactivated successfully. Feb 9 09:57:31.779218 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:57:31.780285 systemd-logind[1429]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:57:31.781014 systemd-logind[1429]: Removed session 6. Feb 9 09:57:31.845732 systemd[1]: Started sshd@4-10.200.20.12:22-10.200.12.6:43914.service. Feb 9 09:57:32.269126 sshd[1774]: Accepted publickey for core from 10.200.12.6 port 43914 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:57:32.270723 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:32.274388 systemd-logind[1429]: New session 7 of user core. Feb 9 09:57:32.274771 systemd[1]: Started session-7.scope. Feb 9 09:57:32.580440 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 9 09:57:32.581281 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:57:32.599220 dbus-daemon[1396]: avc: received setenforce notice (enforcing=1) Feb 9 09:57:32.599945 sudo[1778]: pam_unix(sudo:session): session closed for user root Feb 9 09:57:32.671551 sshd[1774]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:32.674409 systemd-logind[1429]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:57:32.674674 systemd[1]: sshd@4-10.200.20.12:22-10.200.12.6:43914.service: Deactivated successfully. Feb 9 09:57:32.675486 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:57:32.675944 systemd-logind[1429]: Removed session 7. Feb 9 09:57:32.739218 systemd[1]: Started sshd@5-10.200.20.12:22-10.200.12.6:43930.service. Feb 9 09:57:33.158893 sshd[1782]: Accepted publickey for core from 10.200.12.6 port 43930 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:57:33.159446 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:33.163078 systemd-logind[1429]: New session 8 of user core. Feb 9 09:57:33.163553 systemd[1]: Started session-8.scope. 
Feb 9 09:57:33.397555 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 9 09:57:33.397751 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:57:33.400288 sudo[1787]: pam_unix(sudo:session): session closed for user root Feb 9 09:57:33.404577 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 9 09:57:33.405039 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:57:33.413497 systemd[1]: Stopping audit-rules.service... Feb 9 09:57:33.413000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 09:57:33.417528 auditctl[1790]: No rules Feb 9 09:57:33.418997 kernel: kauditd_printk_skb: 31 callbacks suppressed Feb 9 09:57:33.419044 kernel: audit: type=1305 audit(1707472653.413:135): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 09:57:33.419342 systemd[1]: audit-rules.service: Deactivated successfully. Feb 9 09:57:33.419577 systemd[1]: Stopped audit-rules.service. Feb 9 09:57:33.421368 systemd[1]: Starting audit-rules.service... Feb 9 09:57:33.413000 audit[1790]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcca81810 a2=420 a3=0 items=0 ppid=1 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:33.453570 kernel: audit: type=1300 audit(1707472653.413:135): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcca81810 a2=420 a3=0 items=0 ppid=1 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:33.413000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 9 09:57:33.460972 kernel: audit: type=1327 audit(1707472653.413:135): proctitle=2F7362696E2F617564697463746C002D44 Feb 9 09:57:33.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:33.468186 augenrules[1808]: No rules Feb 9 09:57:33.469217 systemd[1]: Finished audit-rules.service. Feb 9 09:57:33.477833 kernel: audit: type=1131 audit(1707472653.418:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:33.478058 sudo[1786]: pam_unix(sudo:session): session closed for user root Feb 9 09:57:33.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:33.494950 kernel: audit: type=1130 audit(1707472653.468:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:57:33.476000 audit[1786]: USER_END pid=1786 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:57:33.514334 kernel: audit: type=1106 audit(1707472653.476:138): pid=1786 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:57:33.476000 audit[1786]: CRED_DISP pid=1786 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:57:33.532331 kernel: audit: type=1104 audit(1707472653.476:139): pid=1786 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:57:33.560518 sshd[1782]: pam_unix(sshd:session): session closed for user core Feb 9 09:57:33.560000 audit[1782]: USER_END pid=1782 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 09:57:33.585229 systemd[1]: sshd@5-10.200.20.12:22-10.200.12.6:43930.service: Deactivated successfully. Feb 9 09:57:33.560000 audit[1782]: CRED_DISP pid=1782 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 09:57:33.603837 kernel: audit: type=1106 audit(1707472653.560:140): pid=1782 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 09:57:33.603959 kernel: audit: type=1104 audit(1707472653.560:141): pid=1782 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 09:57:33.586009 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 09:57:33.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.12:22-10.200.12.6:43930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:33.622976 kernel: audit: type=1131 audit(1707472653.584:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.12:22-10.200.12.6:43930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:33.623169 systemd-logind[1429]: Session 8 logged out. Waiting for processes to exit. Feb 9 09:57:33.624270 systemd-logind[1429]: Removed session 8. Feb 9 09:57:33.629191 systemd[1]: Started sshd@6-10.200.20.12:22-10.200.12.6:43940.service. 
Feb 9 09:57:33.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.12:22-10.200.12.6:43940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:34.049000 audit[1815]: USER_ACCT pid=1815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 09:57:34.050946 sshd[1815]: Accepted publickey for core from 10.200.12.6 port 43940 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:57:34.050000 audit[1815]: CRED_ACQ pid=1815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 09:57:34.050000 audit[1815]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe7abe1a0 a2=3 a3=1 items=0 ppid=1 pid=1815 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:34.050000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 09:57:34.052525 sshd[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:34.056154 systemd-logind[1429]: New session 9 of user core. Feb 9 09:57:34.056601 systemd[1]: Started session-9.scope. Feb 9 09:57:34.059000 audit[1815]: USER_START pid=1815 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 09:57:34.060000 audit[1818]: CRED_ACQ pid=1818 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 09:57:34.290000 audit[1819]: USER_ACCT pid=1819 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:57:34.292320 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:57:34.291000 audit[1819]: CRED_REFR pid=1819 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:57:34.292891 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:57:34.293000 audit[1819]: USER_START pid=1819 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:57:34.863337 systemd[1]: Starting docker.service... 
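The audit records above carry the triggering command line as a hex blob in the proctitle= field, with NUL bytes separating argv elements. Decoding the two values logged here recovers "/sbin/auditctl -D" (the rule flush at 09:57:33.413) and "sshd: core [priv]" (the privileged sshd process at 09:57:34.050); a small decoder:

    # Sketch: turn an audit PROCTITLE hex blob back into a readable command line.
    def decode_proctitle(hex_blob: str) -> str:
        return bytes.fromhex(hex_blob).decode("utf-8", "replace").replace("\x00", " ")

    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))    # /sbin/auditctl -D
    print(decode_proctitle("737368643A20636F7265205B707269765D"))    # sshd: core [priv]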
Feb 9 09:57:34.896777 env[1834]: time="2024-02-09T09:57:34.896720554Z" level=info msg="Starting up" Feb 9 09:57:34.901571 env[1834]: time="2024-02-09T09:57:34.901532978Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:57:34.901571 env[1834]: time="2024-02-09T09:57:34.901558138Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:57:34.901703 env[1834]: time="2024-02-09T09:57:34.901583738Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:57:34.901703 env[1834]: time="2024-02-09T09:57:34.901595098Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:57:34.903446 env[1834]: time="2024-02-09T09:57:34.903419147Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:57:34.903557 env[1834]: time="2024-02-09T09:57:34.903540868Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:57:34.903626 env[1834]: time="2024-02-09T09:57:34.903610468Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:57:34.903755 env[1834]: time="2024-02-09T09:57:34.903667628Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:57:34.908657 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1778162659-merged.mount: Deactivated successfully. Feb 9 09:57:35.980195 env[1834]: time="2024-02-09T09:57:35.980156504Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 09:57:35.980195 env[1834]: time="2024-02-09T09:57:35.980186146Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 09:57:35.980609 env[1834]: time="2024-02-09T09:57:35.980359635Z" level=info msg="Loading containers: start." 
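The grpc messages above show dockerd dialing its embedded containerd over the unix socket /var/run/docker/libcontainerd/docker-containerd.sock ("parsed scheme: unix", then "pick_first"). An illustrative check, not part of Docker, that such a control socket exists and accepts a connection:

    # Illustrative only: probe the containerd control socket referenced in the grpc log lines above.
    import socket

    SOCK = "/var/run/docker/libcontainerd/docker-containerd.sock"

    def socket_reachable(path: str) -> bool:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.settimeout(1.0)
            s.connect(path)
            return True
        except OSError:
            return False
        finally:
            s.close()

    print(SOCK, "reachable:", socket_reachable(SOCK))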
Feb 9 09:57:36.039000 audit[1862]: NETFILTER_CFG table=nat:6 family=2 entries=2 op=nft_register_chain pid=1862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.039000 audit[1862]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc56b8b70 a2=0 a3=1 items=0 ppid=1834 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.039000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Feb 9 09:57:36.040000 audit[1864]: NETFILTER_CFG table=filter:7 family=2 entries=2 op=nft_register_chain pid=1864 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.040000 audit[1864]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe1437050 a2=0 a3=1 items=0 ppid=1834 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.040000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Feb 9 09:57:36.042000 audit[1866]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1866 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.042000 audit[1866]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd1c60500 a2=0 a3=1 items=0 ppid=1834 pid=1866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.042000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 09:57:36.044000 audit[1868]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=1868 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.044000 audit[1868]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd77519e0 a2=0 a3=1 items=0 ppid=1834 pid=1868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.044000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 09:57:36.046000 audit[1870]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=1870 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.046000 audit[1870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe18bf3f0 a2=0 a3=1 items=0 ppid=1834 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.046000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Feb 9 09:57:36.048000 audit[1872]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_rule pid=1872 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.048000 audit[1872]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffee1673d0 a2=0 a3=1 items=0 ppid=1834 pid=1872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.048000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Feb 9 09:57:36.164000 audit[1874]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=1874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.164000 audit[1874]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffc5b3ba0 a2=0 a3=1 items=0 ppid=1834 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.164000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Feb 9 09:57:36.165000 audit[1876]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.165000 audit[1876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff9048a40 a2=0 a3=1 items=0 ppid=1834 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.165000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Feb 9 09:57:36.167000 audit[1878]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=1878 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.167000 audit[1878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffe233c4e0 a2=0 a3=1 items=0 ppid=1834 pid=1878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.167000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 09:57:36.221000 audit[1882]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_unregister_rule pid=1882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.221000 audit[1882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff2dce8a0 a2=0 a3=1 items=0 ppid=1834 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.221000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 09:57:36.222000 audit[1883]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.222000 audit[1883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffccc61e40 a2=0 a3=1 items=0 ppid=1834 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.222000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 09:57:36.283337 kernel: Initializing XFRM netlink socket Feb 9 09:57:36.294593 env[1834]: time="2024-02-09T09:57:36.294560406Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 09:57:36.323000 audit[1891]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1891 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.323000 audit[1891]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffc57f7db0 a2=0 a3=1 items=0 ppid=1834 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.323000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Feb 9 09:57:36.336000 audit[1894]: NETFILTER_CFG table=nat:18 family=2 entries=1 op=nft_register_rule pid=1894 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.336000 audit[1894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd9e12f60 a2=0 a3=1 items=0 ppid=1834 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.336000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Feb 9 09:57:36.339000 audit[1897]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.339000 audit[1897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff6d92f10 a2=0 a3=1 items=0 ppid=1834 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.339000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Feb 9 09:57:36.341000 audit[1899]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.341000 audit[1899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe7e185f0 a2=0 a3=1 items=0 ppid=1834 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.341000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Feb 9 09:57:36.343000 audit[1901]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.343000 audit[1901]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=356 a0=3 a1=ffffde954ef0 a2=0 a3=1 items=0 ppid=1834 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.343000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Feb 9 09:57:36.345000 audit[1903]: NETFILTER_CFG table=nat:22 family=2 entries=2 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.345000 audit[1903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffec0c3f90 a2=0 a3=1 items=0 ppid=1834 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.345000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Feb 9 09:57:36.347000 audit[1905]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.347000 audit[1905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffff0885060 a2=0 a3=1 items=0 ppid=1834 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.347000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Feb 9 09:57:36.349000 audit[1907]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.349000 audit[1907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=fffff5b54ba0 a2=0 a3=1 items=0 ppid=1834 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.349000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Feb 9 09:57:36.351000 audit[1909]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.351000 audit[1909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=fffff9dcb850 a2=0 a3=1 items=0 ppid=1834 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.351000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 09:57:36.353000 audit[1911]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.353000 
audit[1911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffff8a55a00 a2=0 a3=1 items=0 ppid=1834 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.353000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 09:57:36.355000 audit[1913]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.355000 audit[1913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc03a2590 a2=0 a3=1 items=0 ppid=1834 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.355000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Feb 9 09:57:36.356928 systemd-networkd[1590]: docker0: Link UP Feb 9 09:57:36.375000 audit[1917]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_unregister_rule pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.375000 audit[1917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff28f3d30 a2=0 a3=1 items=0 ppid=1834 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.375000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 09:57:36.376000 audit[1918]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:57:36.376000 audit[1918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd419c8c0 a2=0 a3=1 items=0 ppid=1834 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:36.376000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 09:57:36.378804 env[1834]: time="2024-02-09T09:57:36.378758851Z" level=info msg="Loading containers: done." Feb 9 09:57:36.390928 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4057371854-merged.mount: Deactivated successfully. 
Feb 9 09:57:38.420626 env[1834]: time="2024-02-09T09:57:38.420572084Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 09:57:38.421001 env[1834]: time="2024-02-09T09:57:38.420766654Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 09:57:38.421001 env[1834]: time="2024-02-09T09:57:38.420869179Z" level=info msg="Daemon has completed initialization" Feb 9 09:57:38.587354 kernel: kauditd_printk_skb: 83 callbacks suppressed Feb 9 09:57:38.587487 kernel: audit: type=1130 audit(1707472658.567:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:38.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:38.587617 env[1834]: time="2024-02-09T09:57:38.581768554Z" level=info msg="API listen on /run/docker.sock" Feb 9 09:57:38.569055 systemd[1]: Started docker.service. Feb 9 09:57:38.606703 systemd[1]: Reloading. Feb 9 09:57:38.670353 /usr/lib/systemd/system-generators/torcx-generator[1966]: time="2024-02-09T09:57:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:57:38.670719 /usr/lib/systemd/system-generators/torcx-generator[1966]: time="2024-02-09T09:57:38Z" level=info msg="torcx already run" Feb 9 09:57:38.751714 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:57:38.751735 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:57:38.768718 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:57:38.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:38.840613 systemd[1]: Started kubelet.service. Feb 9 09:57:38.859956 kernel: audit: type=1130 audit(1707472658.839:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:38.905957 kubelet[2032]: E0209 09:57:38.905902 2032 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:57:38.908443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:57:38.908600 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
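Each kubelet start attempt in this log dies at flag validation because --container-runtime-endpoint was never supplied. A minimal sketch reproducing that check; the containerd socket path in the passing example is an assumption typical of containerd hosts, not something taken from this log.

    # Minimal reproduction of the validation that fails above: the kubelet refuses to start when
    # --container-runtime-endpoint is missing or empty. Endpoint value below is an assumption.
    def validate(argv: list) -> None:
        for arg in argv:
            if arg.startswith("--container-runtime-endpoint=") and arg.split("=", 1)[1]:
                return
        raise SystemExit(
            "failed to validate kubelet flags: the container runtime endpoint address "
            "was not specified or empty, use --container-runtime-endpoint to set"
        )

    validate(["kubelet", "--container-runtime-endpoint=unix:///run/containerd/containerd.sock"])  # passes
    try:
        validate(["kubelet"])          # no endpoint flag
    except SystemExit as err:
        print(err)                     # same error text the kubelet logs above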
Feb 9 09:57:38.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 09:57:38.927359 kernel: audit: type=1131 audit(1707472658.907:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 09:57:40.825793 env[1448]: time="2024-02-09T09:57:40.825255513Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 09:57:42.326166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923559129.mount: Deactivated successfully. Feb 9 09:57:48.184646 env[1448]: time="2024-02-09T09:57:48.184593195Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:48.196233 env[1448]: time="2024-02-09T09:57:48.196171343Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:48.200792 env[1448]: time="2024-02-09T09:57:48.200749352Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:48.205874 env[1448]: time="2024-02-09T09:57:48.205833820Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:48.206703 env[1448]: time="2024-02-09T09:57:48.206674171Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 09:57:48.215587 env[1448]: time="2024-02-09T09:57:48.215546979Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 09:57:49.009806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 09:57:49.009973 systemd[1]: Stopped kubelet.service. Feb 9 09:57:49.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:49.011579 systemd[1]: Started kubelet.service. Feb 9 09:57:49.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:49.046128 kernel: audit: type=1130 audit(1707472669.009:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:49.046255 kernel: audit: type=1131 audit(1707472669.009:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:57:49.046364 kernel: audit: type=1130 audit(1707472669.010:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:49.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:49.075763 kubelet[2058]: E0209 09:57:49.075673 2058 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:57:49.078141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:57:49.078291 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:57:49.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 09:57:49.097372 kernel: audit: type=1131 audit(1707472669.078:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 09:57:52.317868 env[1448]: time="2024-02-09T09:57:52.317795632Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:52.324076 env[1448]: time="2024-02-09T09:57:52.324037320Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:52.329345 env[1448]: time="2024-02-09T09:57:52.329289694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:52.333310 env[1448]: time="2024-02-09T09:57:52.333271507Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:52.334073 env[1448]: time="2024-02-09T09:57:52.334043772Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 09:57:52.343478 env[1448]: time="2024-02-09T09:57:52.343436885Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 09:57:53.418605 env[1448]: time="2024-02-09T09:57:53.418559235Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:53.424723 env[1448]: time="2024-02-09T09:57:53.424686753Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:53.428864 env[1448]: time="2024-02-09T09:57:53.428833767Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:53.432994 env[1448]: time="2024-02-09T09:57:53.432943221Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:53.433743 env[1448]: time="2024-02-09T09:57:53.433716086Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 09:57:53.442754 env[1448]: time="2024-02-09T09:57:53.442721177Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 09:57:54.381375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1587934640.mount: Deactivated successfully. Feb 9 09:57:54.845902 env[1448]: time="2024-02-09T09:57:54.845845688Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:54.852637 env[1448]: time="2024-02-09T09:57:54.852594461Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:54.856525 env[1448]: time="2024-02-09T09:57:54.856490984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:54.859355 env[1448]: time="2024-02-09T09:57:54.859325634Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:54.859769 env[1448]: time="2024-02-09T09:57:54.859742527Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 09:57:54.869078 env[1448]: time="2024-02-09T09:57:54.869026860Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 09:57:55.457416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1537366499.mount: Deactivated successfully. 
Feb 9 09:57:55.477506 env[1448]: time="2024-02-09T09:57:55.477448522Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:55.485553 env[1448]: time="2024-02-09T09:57:55.485502609Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:55.489870 env[1448]: time="2024-02-09T09:57:55.489832943Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:55.496026 env[1448]: time="2024-02-09T09:57:55.495991452Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:55.496651 env[1448]: time="2024-02-09T09:57:55.496622471Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 09:57:55.505599 env[1448]: time="2024-02-09T09:57:55.505559546Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 09:57:56.658660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount882703572.mount: Deactivated successfully. Feb 9 09:57:59.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:59.259818 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 09:57:59.259979 systemd[1]: Stopped kubelet.service. Feb 9 09:57:59.261586 systemd[1]: Started kubelet.service. Feb 9 09:57:59.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:59.298570 kernel: audit: type=1130 audit(1707472679.258:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:59.298651 kernel: audit: type=1131 audit(1707472679.258:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:59.298680 kernel: audit: type=1130 audit(1707472679.258:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:59.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:57:59.366158 kubelet[2087]: E0209 09:57:59.366108 2087 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:57:59.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 09:57:59.368193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:57:59.368358 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:57:59.387428 kernel: audit: type=1131 audit(1707472679.367:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 09:57:59.566342 env[1448]: time="2024-02-09T09:57:59.565774369Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:59.572339 env[1448]: time="2024-02-09T09:57:59.572295270Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:59.577369 env[1448]: time="2024-02-09T09:57:59.577335730Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:59.582351 env[1448]: time="2024-02-09T09:57:59.582322189Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:57:59.583024 env[1448]: time="2024-02-09T09:57:59.582992087Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 09:57:59.591937 env[1448]: time="2024-02-09T09:57:59.591728810Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 09:58:00.367760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount568393836.mount: Deactivated successfully. 
Feb 9 09:58:00.799812 env[1448]: time="2024-02-09T09:58:00.799750317Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:00.808791 env[1448]: time="2024-02-09T09:58:00.808743361Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:00.814404 env[1448]: time="2024-02-09T09:58:00.814358553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:00.819613 env[1448]: time="2024-02-09T09:58:00.819566814Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:00.820091 env[1448]: time="2024-02-09T09:58:00.820042667Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 09:58:05.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:05.445752 systemd[1]: Stopped kubelet.service. Feb 9 09:58:05.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:05.484210 kernel: audit: type=1130 audit(1707472685.444:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:05.484335 kernel: audit: type=1131 audit(1707472685.444:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:05.493090 systemd[1]: Reloading. Feb 9 09:58:05.549142 /usr/lib/systemd/system-generators/torcx-generator[2176]: time="2024-02-09T09:58:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:58:05.549174 /usr/lib/systemd/system-generators/torcx-generator[2176]: time="2024-02-09T09:58:05Z" level=info msg="torcx already run" Feb 9 09:58:05.636876 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:58:05.637042 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:58:05.654087 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 9 09:58:05.742929 systemd[1]: Started kubelet.service. Feb 9 09:58:05.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:05.768919 kernel: audit: type=1130 audit(1707472685.742:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:05.818041 kubelet[2241]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:58:05.818041 kubelet[2241]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:58:05.818402 kubelet[2241]: I0209 09:58:05.818085 2241 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:58:05.819461 kubelet[2241]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:58:05.819461 kubelet[2241]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:58:06.961100 kubelet[2241]: I0209 09:58:06.961069 2241 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:58:06.961718 kubelet[2241]: I0209 09:58:06.961702 2241 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:58:06.962002 kubelet[2241]: I0209 09:58:06.961986 2241 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:58:06.965126 kubelet[2241]: E0209 09:58:06.965087 2241 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:06.965216 kubelet[2241]: I0209 09:58:06.965140 2241 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:58:06.967095 kubelet[2241]: W0209 09:58:06.967076 2241 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:58:06.967834 kubelet[2241]: I0209 09:58:06.967808 2241 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:58:06.968281 kubelet[2241]: I0209 09:58:06.968264 2241 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:58:06.968465 kubelet[2241]: I0209 09:58:06.968444 2241 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:58:06.968645 kubelet[2241]: I0209 09:58:06.968631 2241 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:58:06.968712 kubelet[2241]: I0209 09:58:06.968703 2241 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:58:06.968870 kubelet[2241]: I0209 09:58:06.968857 2241 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:58:06.975980 kubelet[2241]: I0209 09:58:06.975951 2241 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:58:06.975980 kubelet[2241]: I0209 09:58:06.975980 2241 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:58:06.976127 kubelet[2241]: I0209 09:58:06.976007 2241 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:58:06.976127 kubelet[2241]: I0209 09:58:06.976018 2241 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:58:06.976918 kubelet[2241]: W0209 09:58:06.976855 2241 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-d10cdd880c&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:06.976918 kubelet[2241]: E0209 09:58:06.976917 2241 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-d10cdd880c&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:06.977024 kubelet[2241]: I0209 09:58:06.976982 2241 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:58:06.977295 kubelet[2241]: W0209 09:58:06.977266 2241 probe.go:268] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 09:58:06.977675 kubelet[2241]: I0209 09:58:06.977647 2241 server.go:1186] "Started kubelet" Feb 9 09:58:06.977000 audit[2241]: AVC avc: denied { mac_admin } for pid=2241 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:58:06.981667 kubelet[2241]: E0209 09:58:06.981645 2241 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:58:06.981782 kubelet[2241]: E0209 09:58:06.981770 2241 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:58:06.982097 kubelet[2241]: W0209 09:58:06.982059 2241 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:06.982194 kubelet[2241]: E0209 09:58:06.982183 2241 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:06.982428 kubelet[2241]: E0209 09:58:06.982317 2241 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-d10cdd880c.17b2295cbafba0ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-d10cdd880c", UID:"ci-3510.3.2-a-d10cdd880c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-d10cdd880c"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 6, 977622253, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 6, 977622253, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.20.12:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.12:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:58:06.983765 kubelet[2241]: I0209 09:58:06.983743 2241 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:58:06.984469 kubelet[2241]: I0209 09:58:06.984449 2241 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:58:06.985412 kubelet[2241]: I0209 09:58:06.985393 2241 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 09:58:06.985549 kubelet[2241]: I0209 
09:58:06.985534 2241 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 09:58:06.985725 kubelet[2241]: I0209 09:58:06.985713 2241 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:58:06.988825 kubelet[2241]: I0209 09:58:06.988807 2241 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:58:06.988999 kubelet[2241]: I0209 09:58:06.988984 2241 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:58:06.989471 kubelet[2241]: W0209 09:58:06.989433 2241 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:06.989602 kubelet[2241]: E0209 09:58:06.989585 2241 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:06.989899 kubelet[2241]: E0209 09:58:06.989875 2241 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-d10cdd880c?timeout=10s": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:06.977000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:58:07.007660 kernel: audit: type=1400 audit(1707472686.977:190): avc: denied { mac_admin } for pid=2241 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:58:07.007780 kernel: audit: type=1401 audit(1707472686.977:190): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:58:06.977000 audit[2241]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000db1320 a1=4000a0e390 a2=4000db12f0 a3=25 items=0 ppid=1 pid=2241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.036791 kernel: audit: type=1300 audit(1707472686.977:190): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000db1320 a1=4000a0e390 a2=4000db12f0 a3=25 items=0 ppid=1 pid=2241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.036919 kernel: audit: type=1327 audit(1707472686.977:190): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:58:06.977000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:58:07.063432 kernel: audit: type=1400 audit(1707472686.983:191): avc: denied { 
mac_admin } for pid=2241 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:58:06.983000 audit[2241]: AVC avc: denied { mac_admin } for pid=2241 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:58:06.983000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:58:07.092931 kernel: audit: type=1401 audit(1707472686.983:191): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:58:07.093179 kernel: audit: type=1300 audit(1707472686.983:191): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000f08c80 a1=4000a0e3a8 a2=4000db13b0 a3=25 items=0 ppid=1 pid=2241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:06.983000 audit[2241]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000f08c80 a1=4000a0e3a8 a2=4000db13b0 a3=25 items=0 ppid=1 pid=2241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:06.983000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:58:07.041000 audit[2253]: NETFILTER_CFG table=mangle:30 family=2 entries=2 op=nft_register_chain pid=2253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.041000 audit[2253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe635a530 a2=0 a3=1 items=0 ppid=2241 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.041000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 09:58:07.062000 audit[2255]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2255 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.062000 audit[2255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcfd515b0 a2=0 a3=1 items=0 ppid=2241 pid=2255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.062000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 09:58:07.082000 audit[2257]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2257 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.082000 audit[2257]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc1afb0d0 a2=0 a3=1 items=0 ppid=2241 pid=2257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.082000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 09:58:07.082000 audit[2259]: NETFILTER_CFG table=filter:33 family=2 entries=2 op=nft_register_chain pid=2259 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.082000 audit[2259]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe6009180 a2=0 a3=1 items=0 ppid=2241 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.082000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 09:58:07.129000 audit[2263]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_rule pid=2263 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.129000 audit[2263]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffdbd9ced0 a2=0 a3=1 items=0 ppid=2241 pid=2263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.129000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 09:58:07.130000 audit[2264]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=2264 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.130000 audit[2264]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc3d5b840 a2=0 a3=1 items=0 ppid=2241 pid=2264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.130000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 09:58:07.143000 audit[2267]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=2267 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.143000 audit[2267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff5731b80 a2=0 a3=1 items=0 ppid=2241 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 09:58:07.151000 audit[2270]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2270 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.151000 audit[2270]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffff3d562c0 a2=0 a3=1 items=0 ppid=2241 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.151000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 09:58:07.152000 audit[2271]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_chain pid=2271 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.152000 audit[2271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffff773070 a2=0 a3=1 items=0 ppid=2241 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.152000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 09:58:07.153000 audit[2272]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2272 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.153000 audit[2272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff0f0f0d0 a2=0 a3=1 items=0 ppid=2241 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.153000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 09:58:07.155000 audit[2274]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=2274 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.155000 audit[2274]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff1f2d830 a2=0 a3=1 items=0 ppid=2241 pid=2274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.155000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 09:58:07.157000 audit[2276]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_rule pid=2276 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.157000 audit[2276]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe8f2b8b0 a2=0 a3=1 items=0 ppid=2241 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.157000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 09:58:07.159000 audit[2278]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_rule pid=2278 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.159000 audit[2278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=fffff0423730 a2=0 a3=1 items=0 ppid=2241 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 9 09:58:07.159000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 09:58:07.161000 audit[2280]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_rule pid=2280 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.161000 audit[2280]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffc7f70e60 a2=0 a3=1 items=0 ppid=2241 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.161000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 09:58:07.164000 audit[2282]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_rule pid=2282 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.164000 audit[2282]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=ffffe5d50510 a2=0 a3=1 items=0 ppid=2241 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.164000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 09:58:07.166077 kubelet[2241]: I0209 09:58:07.166051 2241 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 09:58:07.165000 audit[2283]: NETFILTER_CFG table=mangle:45 family=10 entries=2 op=nft_register_chain pid=2283 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.165000 audit[2283]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd2bceaa0 a2=0 a3=1 items=0 ppid=2241 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.165000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 09:58:07.165000 audit[2284]: NETFILTER_CFG table=mangle:46 family=2 entries=1 op=nft_register_chain pid=2284 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.165000 audit[2284]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffff03a4a0 a2=0 a3=1 items=0 ppid=2241 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.165000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 09:58:07.166000 audit[2285]: NETFILTER_CFG table=nat:47 family=10 entries=2 op=nft_register_chain pid=2285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.166000 audit[2285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc71fb1b0 a2=0 a3=1 items=0 ppid=2241 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.166000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 09:58:07.167000 audit[2286]: NETFILTER_CFG table=nat:48 family=2 entries=1 op=nft_register_chain pid=2286 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.167000 audit[2286]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9d9ffe0 a2=0 a3=1 items=0 ppid=2241 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.167000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 09:58:07.168000 audit[2288]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2288 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:07.168000 audit[2288]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcaa85cb0 a2=0 a3=1 items=0 ppid=2241 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.168000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 09:58:07.169000 audit[2289]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_rule pid=2289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.169000 audit[2289]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd4202460 a2=0 a3=1 items=0 ppid=2241 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.169000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 09:58:07.171000 audit[2290]: NETFILTER_CFG table=filter:51 family=10 entries=2 op=nft_register_chain pid=2290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.171000 audit[2290]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=fffffa02add0 a2=0 a3=1 items=0 ppid=2241 pid=2290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.171000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 09:58:07.173000 audit[2292]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=2292 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.173000 audit[2292]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffff9744b80 a2=0 a3=1 items=0 ppid=2241 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.173000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 09:58:07.174000 audit[2293]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_chain pid=2293 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.174000 audit[2293]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff5ec79f0 a2=0 a3=1 items=0 ppid=2241 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.174000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 09:58:07.175000 audit[2294]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_chain pid=2294 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.175000 audit[2294]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd20b7e60 a2=0 a3=1 items=0 ppid=2241 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.175000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 09:58:07.177000 audit[2296]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=2296 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.177000 audit[2296]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 
a0=3 a1=ffffe53170c0 a2=0 a3=1 items=0 ppid=2241 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.177000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 09:58:07.182437 kubelet[2241]: I0209 09:58:07.182410 2241 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:07.182976 kubelet[2241]: E0209 09:58:07.182957 2241 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:07.181000 audit[2298]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=2298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.181000 audit[2298]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd5311320 a2=0 a3=1 items=0 ppid=2241 pid=2298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.181000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 09:58:07.183992 kubelet[2241]: I0209 09:58:07.183975 2241 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:58:07.184079 kubelet[2241]: I0209 09:58:07.184068 2241 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:58:07.184147 kubelet[2241]: I0209 09:58:07.184137 2241 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:58:07.184000 audit[2301]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_rule pid=2301 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.184000 audit[2301]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffe3220c10 a2=0 a3=1 items=0 ppid=2241 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.184000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 09:58:07.186000 audit[2303]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_rule pid=2303 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.186000 audit[2303]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffd2d121d0 a2=0 a3=1 items=0 ppid=2241 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 09:58:07.189234 kubelet[2241]: 
I0209 09:58:07.189216 2241 policy_none.go:49] "None policy: Start" Feb 9 09:58:07.189922 kubelet[2241]: I0209 09:58:07.189889 2241 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:58:07.189922 kubelet[2241]: I0209 09:58:07.189923 2241 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:58:07.191408 kubelet[2241]: E0209 09:58:07.191379 2241 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-d10cdd880c?timeout=10s": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:07.196000 audit[2305]: NETFILTER_CFG table=nat:59 family=10 entries=1 op=nft_register_rule pid=2305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.196000 audit[2305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=fffff175b7f0 a2=0 a3=1 items=0 ppid=2241 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 09:58:07.198522 kubelet[2241]: I0209 09:58:07.198506 2241 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 09:58:07.198642 kubelet[2241]: I0209 09:58:07.198631 2241 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:58:07.198720 kubelet[2241]: I0209 09:58:07.198696 2241 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:58:07.197000 audit[2241]: AVC avc: denied { mac_admin } for pid=2241 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:58:07.197000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:58:07.197000 audit[2241]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000be5200 a1=40011e1b18 a2=4000be51d0 a3=25 items=0 ppid=1 pid=2241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.197000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:58:07.198884 kubelet[2241]: I0209 09:58:07.198776 2241 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 09:58:07.198931 kubelet[2241]: I0209 09:58:07.198710 2241 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:58:07.199027 kubelet[2241]: E0209 09:58:07.199017 2241 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 09:58:07.199080 kubelet[2241]: I0209 09:58:07.198936 2241 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:58:07.201035 kubelet[2241]: E0209 09:58:07.201007 2241 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-d10cdd880c\" not found" Feb 9 09:58:07.201393 kubelet[2241]: W0209 09:58:07.201207 2241 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:07.201393 kubelet[2241]: E0209 09:58:07.201352 2241 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:07.200000 audit[2306]: NETFILTER_CFG table=mangle:60 family=10 entries=1 op=nft_register_chain pid=2306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.200000 audit[2306]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd2931a60 a2=0 a3=1 items=0 ppid=2241 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.200000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 09:58:07.201000 audit[2307]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=2307 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.201000 audit[2307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff0e36db0 a2=0 a3=1 items=0 ppid=2241 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.201000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 09:58:07.203000 audit[2308]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=2308 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:07.203000 audit[2308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcbdcb190 a2=0 a3=1 items=0 ppid=2241 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:07.203000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 09:58:07.300001 kubelet[2241]: I0209 09:58:07.299954 2241 topology_manager.go:210] "Topology 
Admit Handler" Feb 9 09:58:07.301666 kubelet[2241]: I0209 09:58:07.301643 2241 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:07.303042 kubelet[2241]: I0209 09:58:07.303021 2241 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:07.304712 kubelet[2241]: I0209 09:58:07.304676 2241 status_manager.go:698] "Failed to get status for pod" podUID=923d4745e636457e5df4c72352c958c7 pod="kube-system/kube-scheduler-ci-3510.3.2-a-d10cdd880c" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-d10cdd880c\": dial tcp 10.200.20.12:6443: connect: connection refused" Feb 9 09:58:07.308281 kubelet[2241]: I0209 09:58:07.308248 2241 status_manager.go:698] "Failed to get status for pod" podUID=40ae64c2bb8fdd55d72ee06a3e227bed pod="kube-system/kube-apiserver-ci-3510.3.2-a-d10cdd880c" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-d10cdd880c\": dial tcp 10.200.20.12:6443: connect: connection refused" Feb 9 09:58:07.308503 kubelet[2241]: I0209 09:58:07.308465 2241 status_manager.go:698] "Failed to get status for pod" podUID=08c387d5f401be4fd08dcaf53ffda019 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d10cdd880c" err="Get \"https://10.200.20.12:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-d10cdd880c\": dial tcp 10.200.20.12:6443: connect: connection refused" Feb 9 09:58:07.385064 kubelet[2241]: I0209 09:58:07.385023 2241 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:07.385572 kubelet[2241]: E0209 09:58:07.385540 2241 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:07.390021 kubelet[2241]: I0209 09:58:07.389995 2241 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/923d4745e636457e5df4c72352c958c7-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-d10cdd880c\" (UID: \"923d4745e636457e5df4c72352c958c7\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:07.390070 kubelet[2241]: I0209 09:58:07.390039 2241 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08c387d5f401be4fd08dcaf53ffda019-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-d10cdd880c\" (UID: \"08c387d5f401be4fd08dcaf53ffda019\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:07.390100 kubelet[2241]: I0209 09:58:07.390071 2241 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08c387d5f401be4fd08dcaf53ffda019-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-d10cdd880c\" (UID: \"08c387d5f401be4fd08dcaf53ffda019\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:07.390100 kubelet[2241]: I0209 09:58:07.390091 2241 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08c387d5f401be4fd08dcaf53ffda019-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-d10cdd880c\" (UID: \"08c387d5f401be4fd08dcaf53ffda019\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d10cdd880c" Feb 
9 09:58:07.390145 kubelet[2241]: I0209 09:58:07.390114 2241 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08c387d5f401be4fd08dcaf53ffda019-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-d10cdd880c\" (UID: \"08c387d5f401be4fd08dcaf53ffda019\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:07.390145 kubelet[2241]: I0209 09:58:07.390138 2241 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40ae64c2bb8fdd55d72ee06a3e227bed-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-d10cdd880c\" (UID: \"40ae64c2bb8fdd55d72ee06a3e227bed\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:07.390189 kubelet[2241]: I0209 09:58:07.390157 2241 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40ae64c2bb8fdd55d72ee06a3e227bed-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-d10cdd880c\" (UID: \"40ae64c2bb8fdd55d72ee06a3e227bed\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:07.390189 kubelet[2241]: I0209 09:58:07.390182 2241 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40ae64c2bb8fdd55d72ee06a3e227bed-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-d10cdd880c\" (UID: \"40ae64c2bb8fdd55d72ee06a3e227bed\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:07.390237 kubelet[2241]: I0209 09:58:07.390202 2241 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/08c387d5f401be4fd08dcaf53ffda019-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-d10cdd880c\" (UID: \"08c387d5f401be4fd08dcaf53ffda019\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:07.592588 kubelet[2241]: E0209 09:58:07.592477 2241 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-d10cdd880c?timeout=10s": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:07.607619 env[1448]: time="2024-02-09T09:58:07.607333953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-d10cdd880c,Uid:923d4745e636457e5df4c72352c958c7,Namespace:kube-system,Attempt:0,}" Feb 9 09:58:07.609730 env[1448]: time="2024-02-09T09:58:07.609694647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-d10cdd880c,Uid:40ae64c2bb8fdd55d72ee06a3e227bed,Namespace:kube-system,Attempt:0,}" Feb 9 09:58:07.611763 env[1448]: time="2024-02-09T09:58:07.611725173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-d10cdd880c,Uid:08c387d5f401be4fd08dcaf53ffda019,Namespace:kube-system,Attempt:0,}" Feb 9 09:58:07.787611 kubelet[2241]: I0209 09:58:07.787582 2241 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:07.787960 kubelet[2241]: E0209 09:58:07.787940 2241 kubelet_node_status.go:92] "Unable to register node with API server" err="Post 
\"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:08.184888 kubelet[2241]: W0209 09:58:08.184811 2241 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:08.184888 kubelet[2241]: E0209 09:58:08.184865 2241 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:08.196700 kubelet[2241]: E0209 09:58:08.196604 2241 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-d10cdd880c.17b2295cbafba0ed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-d10cdd880c", UID:"ci-3510.3.2-a-d10cdd880c", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-d10cdd880c"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 6, 977622253, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 6, 977622253, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.20.12:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.12:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:58:08.208164 kubelet[2241]: W0209 09:58:08.208108 2241 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-d10cdd880c&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:08.208164 kubelet[2241]: E0209 09:58:08.208167 2241 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-d10cdd880c&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:08.316212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount648187685.mount: Deactivated successfully. 
Feb 9 09:58:08.335079 env[1448]: time="2024-02-09T09:58:08.335023022Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:08.353210 env[1448]: time="2024-02-09T09:58:08.353155708Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:08.360592 env[1448]: time="2024-02-09T09:58:08.360549193Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:08.361076 kubelet[2241]: W0209 09:58:08.361026 2241 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:08.361139 kubelet[2241]: E0209 09:58:08.361079 2241 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:08.363718 env[1448]: time="2024-02-09T09:58:08.363675703Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:08.370542 env[1448]: time="2024-02-09T09:58:08.370492615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:08.377647 env[1448]: time="2024-02-09T09:58:08.377602934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:08.383386 env[1448]: time="2024-02-09T09:58:08.383353502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:08.386687 env[1448]: time="2024-02-09T09:58:08.386650976Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:08.393609 kubelet[2241]: E0209 09:58:08.393571 2241 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-d10cdd880c?timeout=10s": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:08.394379 env[1448]: time="2024-02-09T09:58:08.394345028Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:08.401077 env[1448]: time="2024-02-09T09:58:08.401031457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:08.410858 env[1448]: 
time="2024-02-09T09:58:08.410814556Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:08.434767 env[1448]: time="2024-02-09T09:58:08.434671409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:58:08.434767 env[1448]: time="2024-02-09T09:58:08.434727810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:58:08.434767 env[1448]: time="2024-02-09T09:58:08.434739011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:58:08.435486 env[1448]: time="2024-02-09T09:58:08.435359424Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49ceb362a18cc3f3ef562270e8857b10bd34efc875d2620845118ac8ec232372 pid=2316 runtime=io.containerd.runc.v2 Feb 9 09:58:08.436483 env[1448]: time="2024-02-09T09:58:08.436442769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:08.489941 env[1448]: time="2024-02-09T09:58:08.489883803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-d10cdd880c,Uid:40ae64c2bb8fdd55d72ee06a3e227bed,Namespace:kube-system,Attempt:0,} returns sandbox id \"49ceb362a18cc3f3ef562270e8857b10bd34efc875d2620845118ac8ec232372\"" Feb 9 09:58:08.493650 env[1448]: time="2024-02-09T09:58:08.493594606Z" level=info msg="CreateContainer within sandbox \"49ceb362a18cc3f3ef562270e8857b10bd34efc875d2620845118ac8ec232372\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 09:58:08.502575 env[1448]: time="2024-02-09T09:58:08.502502525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:58:08.502691 env[1448]: time="2024-02-09T09:58:08.502582966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:58:08.502691 env[1448]: time="2024-02-09T09:58:08.502612367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:58:08.502846 env[1448]: time="2024-02-09T09:58:08.502789771Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/859097416cf82fa82f3d2e1a6aab33e44b34e208567dc3858f711bf79ff60c99 pid=2359 runtime=io.containerd.runc.v2 Feb 9 09:58:08.506140 env[1448]: time="2024-02-09T09:58:08.506063324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:58:08.506140 env[1448]: time="2024-02-09T09:58:08.506101765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:58:08.506140 env[1448]: time="2024-02-09T09:58:08.506113165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:58:08.506755 env[1448]: time="2024-02-09T09:58:08.506685778Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01bf620701524c7719bd56aca3c19fdb1daad95a660652ac8d7d193ebf1637da pid=2369 runtime=io.containerd.runc.v2 Feb 9 09:58:08.544650 env[1448]: time="2024-02-09T09:58:08.544587145Z" level=info msg="CreateContainer within sandbox \"49ceb362a18cc3f3ef562270e8857b10bd34efc875d2620845118ac8ec232372\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4fb819394ba109d8e5f42dfb3e532fb2fda92f0dfc01db81dfabf485c4c55ecd\"" Feb 9 09:58:08.546225 env[1448]: time="2024-02-09T09:58:08.545290961Z" level=info msg="StartContainer for \"4fb819394ba109d8e5f42dfb3e532fb2fda92f0dfc01db81dfabf485c4c55ecd\"" Feb 9 09:58:08.579805 env[1448]: time="2024-02-09T09:58:08.579752611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-d10cdd880c,Uid:923d4745e636457e5df4c72352c958c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"859097416cf82fa82f3d2e1a6aab33e44b34e208567dc3858f711bf79ff60c99\"" Feb 9 09:58:08.584284 env[1448]: time="2024-02-09T09:58:08.584232831Z" level=info msg="CreateContainer within sandbox \"859097416cf82fa82f3d2e1a6aab33e44b34e208567dc3858f711bf79ff60c99\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 09:58:08.591611 env[1448]: time="2024-02-09T09:58:08.591573115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-d10cdd880c,Uid:08c387d5f401be4fd08dcaf53ffda019,Namespace:kube-system,Attempt:0,} returns sandbox id \"01bf620701524c7719bd56aca3c19fdb1daad95a660652ac8d7d193ebf1637da\"" Feb 9 09:58:08.592033 kubelet[2241]: I0209 09:58:08.592001 2241 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:08.592392 kubelet[2241]: E0209 09:58:08.592369 2241 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:08.594324 env[1448]: time="2024-02-09T09:58:08.594273735Z" level=info msg="CreateContainer within sandbox \"01bf620701524c7719bd56aca3c19fdb1daad95a660652ac8d7d193ebf1637da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 09:58:08.640606 env[1448]: time="2024-02-09T09:58:08.640557129Z" level=info msg="CreateContainer within sandbox \"859097416cf82fa82f3d2e1a6aab33e44b34e208567dc3858f711bf79ff60c99\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"07841ce53cb4bdf8cdcaddab940136289e0bd0d7a88f1bd803d09fe743b67dbd\"" Feb 9 09:58:08.641838 env[1448]: time="2024-02-09T09:58:08.641798437Z" level=info msg="StartContainer for \"07841ce53cb4bdf8cdcaddab940136289e0bd0d7a88f1bd803d09fe743b67dbd\"" Feb 9 09:58:08.661686 env[1448]: time="2024-02-09T09:58:08.661523838Z" level=info msg="StartContainer for \"4fb819394ba109d8e5f42dfb3e532fb2fda92f0dfc01db81dfabf485c4c55ecd\" returns successfully" Feb 9 09:58:08.665345 env[1448]: time="2024-02-09T09:58:08.665028196Z" level=info msg="CreateContainer within sandbox \"01bf620701524c7719bd56aca3c19fdb1daad95a660652ac8d7d193ebf1637da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"934a647092a60a11890b9fb0782bfcf7c082f38eb0f49203f48af6918f89e175\"" Feb 9 09:58:08.665894 env[1448]: 
time="2024-02-09T09:58:08.665671851Z" level=info msg="StartContainer for \"934a647092a60a11890b9fb0782bfcf7c082f38eb0f49203f48af6918f89e175\"" Feb 9 09:58:08.695913 kubelet[2241]: W0209 09:58:08.695805 2241 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:08.695913 kubelet[2241]: E0209 09:58:08.695855 2241 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Feb 9 09:58:08.746349 env[1448]: time="2024-02-09T09:58:08.746271691Z" level=info msg="StartContainer for \"07841ce53cb4bdf8cdcaddab940136289e0bd0d7a88f1bd803d09fe743b67dbd\" returns successfully" Feb 9 09:58:08.788908 env[1448]: time="2024-02-09T09:58:08.788846283Z" level=info msg="StartContainer for \"934a647092a60a11890b9fb0782bfcf7c082f38eb0f49203f48af6918f89e175\" returns successfully" Feb 9 09:58:09.311055 systemd[1]: run-containerd-runc-k8s.io-49ceb362a18cc3f3ef562270e8857b10bd34efc875d2620845118ac8ec232372-runc.YTTvJK.mount: Deactivated successfully. Feb 9 09:58:10.193941 kubelet[2241]: I0209 09:58:10.193902 2241 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:12.063695 kubelet[2241]: E0209 09:58:12.063665 2241 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-d10cdd880c\" not found" node="ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:12.242604 kubelet[2241]: I0209 09:58:12.242565 2241 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:12.983344 kubelet[2241]: I0209 09:58:12.983288 2241 apiserver.go:52] "Watching apiserver" Feb 9 09:58:12.989647 kubelet[2241]: I0209 09:58:12.989605 2241 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:58:13.015973 kubelet[2241]: I0209 09:58:13.015929 2241 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:58:14.858871 systemd[1]: Reloading. Feb 9 09:58:14.923121 /usr/lib/systemd/system-generators/torcx-generator[2564]: time="2024-02-09T09:58:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:58:14.923154 /usr/lib/systemd/system-generators/torcx-generator[2564]: time="2024-02-09T09:58:14Z" level=info msg="torcx already run" Feb 9 09:58:14.999535 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:58:14.999554 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:58:15.016671 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 9 09:58:15.102732 kubelet[2241]: I0209 09:58:15.102617 2241 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:58:15.102964 systemd[1]: Stopping kubelet.service... Feb 9 09:58:15.122061 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:58:15.122456 systemd[1]: Stopped kubelet.service. Feb 9 09:58:15.131523 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 9 09:58:15.131620 kernel: audit: type=1131 audit(1707472695.121:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:15.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:15.128703 systemd[1]: Started kubelet.service. Feb 9 09:58:15.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:15.166962 kernel: audit: type=1130 audit(1707472695.127:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:15.203823 kubelet[2631]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:58:15.204394 kubelet[2631]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:58:15.204648 kubelet[2631]: I0209 09:58:15.204579 2631 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:58:15.206299 kubelet[2631]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:58:15.206407 kubelet[2631]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:58:15.210283 kubelet[2631]: I0209 09:58:15.210217 2631 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:58:15.210434 kubelet[2631]: I0209 09:58:15.210420 2631 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:58:15.210751 kubelet[2631]: I0209 09:58:15.210731 2631 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:58:15.212382 kubelet[2631]: I0209 09:58:15.212363 2631 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 09:58:15.214598 kubelet[2631]: I0209 09:58:15.214571 2631 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:58:15.216071 kubelet[2631]: W0209 09:58:15.216054 2631 machine.go:65] Cannot read vendor id correctly, set empty. 
Feb 9 09:58:15.216825 kubelet[2631]: I0209 09:58:15.216790 2631 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 09:58:15.217331 kubelet[2631]: I0209 09:58:15.217289 2631 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:58:15.217471 kubelet[2631]: I0209 09:58:15.217456 2631 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:58:15.217609 kubelet[2631]: I0209 09:58:15.217596 2631 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:58:15.217681 kubelet[2631]: I0209 09:58:15.217671 2631 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:58:15.217758 kubelet[2631]: I0209 09:58:15.217748 2631 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:58:15.228794 kubelet[2631]: I0209 09:58:15.228764 2631 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:58:15.228794 kubelet[2631]: I0209 09:58:15.228790 2631 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:58:15.228942 kubelet[2631]: I0209 09:58:15.228815 2631 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:58:15.228942 kubelet[2631]: I0209 09:58:15.228827 2631 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:58:15.229912 kubelet[2631]: I0209 09:58:15.229896 2631 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:58:15.230443 kubelet[2631]: I0209 09:58:15.230425 2631 server.go:1186] "Started kubelet" Feb 9 09:58:15.232156 kubelet[2631]: I0209 09:58:15.232133 2631 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 09:58:15.232265 kubelet[2631]: I0209 09:58:15.232254 2631 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 09:58:15.232349 
kubelet[2631]: I0209 09:58:15.232339 2631 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:58:15.230000 audit[2631]: AVC avc: denied { mac_admin } for pid=2631 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:58:15.245126 kubelet[2631]: E0209 09:58:15.245104 2631 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:58:15.245274 kubelet[2631]: E0209 09:58:15.245264 2631 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:58:15.247705 kubelet[2631]: I0209 09:58:15.247686 2631 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:58:15.248421 kubelet[2631]: I0209 09:58:15.248407 2631 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:58:15.251194 kubelet[2631]: I0209 09:58:15.251180 2631 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:58:15.251390 kubelet[2631]: I0209 09:58:15.251378 2631 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:58:15.230000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:58:15.291953 kernel: audit: type=1400 audit(1707472695.230:228): avc: denied { mac_admin } for pid=2631 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:58:15.292048 kernel: audit: type=1401 audit(1707472695.230:228): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:58:15.230000 audit[2631]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bbd740 a1=4000c2e888 a2=4000bbd710 a3=25 items=0 ppid=1 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:15.329523 kernel: audit: type=1300 audit(1707472695.230:228): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bbd740 a1=4000c2e888 a2=4000bbd710 a3=25 items=0 ppid=1 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:15.230000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:58:15.358323 kernel: audit: type=1327 audit(1707472695.230:228): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:58:15.358457 kernel: audit: type=1400 audit(1707472695.230:229): avc: denied { mac_admin } for pid=2631 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:58:15.230000 audit[2631]: AVC avc: denied { mac_admin } for pid=2631 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:58:15.230000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:58:15.388608 kubelet[2631]: I0209 09:58:15.388582 2631 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:15.392836 kernel: audit: type=1401 audit(1707472695.230:229): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:58:15.230000 audit[2631]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000baf1e0 a1=4000c2e8a0 a2=4000bbd800 a3=25 items=0 ppid=1 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:15.423345 kernel: audit: type=1300 audit(1707472695.230:229): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000baf1e0 a1=4000c2e8a0 a2=4000bbd800 a3=25 items=0 ppid=1 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:15.432179 kubelet[2631]: I0209 09:58:15.432152 2631 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:15.432445 kubelet[2631]: I0209 09:58:15.432431 2631 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:15.439955 kubelet[2631]: I0209 09:58:15.439739 2631 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:58:15.230000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:58:15.465245 kernel: audit: type=1327 audit(1707472695.230:229): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:58:15.538391 kubelet[2631]: I0209 09:58:15.538364 2631 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:58:15.538547 kubelet[2631]: I0209 09:58:15.538537 2631 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:58:15.538612 kubelet[2631]: I0209 09:58:15.538603 2631 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:58:15.538704 kubelet[2631]: E0209 09:58:15.538695 2631 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:58:15.578113 kubelet[2631]: I0209 09:58:15.578085 2631 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:58:15.578273 kubelet[2631]: I0209 09:58:15.578262 2631 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:58:15.578375 kubelet[2631]: I0209 09:58:15.578364 2631 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:58:15.578559 kubelet[2631]: I0209 09:58:15.578547 2631 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 09:58:15.578626 kubelet[2631]: I0209 09:58:15.578616 2631 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 09:58:15.578695 kubelet[2631]: I0209 09:58:15.578686 2631 policy_none.go:49] "None policy: Start" Feb 9 09:58:15.590317 kubelet[2631]: I0209 09:58:15.588823 2631 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:58:15.590317 kubelet[2631]: I0209 09:58:15.588870 2631 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:58:15.590317 kubelet[2631]: I0209 09:58:15.589213 2631 state_mem.go:75] "Updated machine memory state" Feb 9 09:58:15.589000 audit[2631]: AVC avc: denied { mac_admin } for pid=2631 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:58:15.589000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 09:58:15.589000 audit[2631]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400162c4e0 a1=400162a678 a2=400162c4b0 a3=25 items=0 ppid=1 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:15.589000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 09:58:15.590671 kubelet[2631]: I0209 09:58:15.590445 2631 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:58:15.590671 kubelet[2631]: I0209 09:58:15.590613 2631 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 09:58:15.591114 kubelet[2631]: I0209 09:58:15.591076 2631 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:58:15.639906 kubelet[2631]: I0209 09:58:15.639788 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:15.640033 kubelet[2631]: I0209 09:58:15.639909 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:15.640033 kubelet[2631]: I0209 09:58:15.639956 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:15.669508 kubelet[2631]: I0209 09:58:15.669455 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08c387d5f401be4fd08dcaf53ffda019-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-d10cdd880c\" (UID: \"08c387d5f401be4fd08dcaf53ffda019\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:15.669661 kubelet[2631]: I0209 09:58:15.669531 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08c387d5f401be4fd08dcaf53ffda019-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-d10cdd880c\" (UID: \"08c387d5f401be4fd08dcaf53ffda019\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:15.669832 kubelet[2631]: I0209 09:58:15.669806 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/923d4745e636457e5df4c72352c958c7-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-d10cdd880c\" (UID: \"923d4745e636457e5df4c72352c958c7\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:15.669873 kubelet[2631]: I0209 09:58:15.669839 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40ae64c2bb8fdd55d72ee06a3e227bed-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-d10cdd880c\" (UID: \"40ae64c2bb8fdd55d72ee06a3e227bed\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:15.669966 kubelet[2631]: I0209 09:58:15.669943 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40ae64c2bb8fdd55d72ee06a3e227bed-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-d10cdd880c\" (UID: \"40ae64c2bb8fdd55d72ee06a3e227bed\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:15.670008 kubelet[2631]: I0209 09:58:15.669977 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08c387d5f401be4fd08dcaf53ffda019-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-d10cdd880c\" (UID: \"08c387d5f401be4fd08dcaf53ffda019\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:15.670034 kubelet[2631]: I0209 09:58:15.670016 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/08c387d5f401be4fd08dcaf53ffda019-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-d10cdd880c\" (UID: 
\"08c387d5f401be4fd08dcaf53ffda019\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:15.670060 kubelet[2631]: I0209 09:58:15.670037 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08c387d5f401be4fd08dcaf53ffda019-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-d10cdd880c\" (UID: \"08c387d5f401be4fd08dcaf53ffda019\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:15.670085 kubelet[2631]: I0209 09:58:15.670058 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40ae64c2bb8fdd55d72ee06a3e227bed-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-d10cdd880c\" (UID: \"40ae64c2bb8fdd55d72ee06a3e227bed\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:16.243758 kubelet[2631]: I0209 09:58:16.243719 2631 apiserver.go:52] "Watching apiserver" Feb 9 09:58:16.251868 kubelet[2631]: I0209 09:58:16.251832 2631 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:58:16.273336 kubelet[2631]: I0209 09:58:16.273288 2631 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:58:16.654096 kubelet[2631]: E0209 09:58:16.654056 2631 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-d10cdd880c\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:16.843946 kubelet[2631]: E0209 09:58:16.843914 2631 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-d10cdd880c\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-d10cdd880c" Feb 9 09:58:17.435289 kubelet[2631]: I0209 09:58:17.435253 2631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-d10cdd880c" podStartSLOduration=2.435205061 pod.CreationTimestamp="2024-02-09 09:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:58:17.434361765 +0000 UTC m=+2.298546091" watchObservedRunningTime="2024-02-09 09:58:17.435205061 +0000 UTC m=+2.299389387" Feb 9 09:58:17.435825 kubelet[2631]: I0209 09:58:17.435811 2631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d10cdd880c" podStartSLOduration=2.435787151 pod.CreationTimestamp="2024-02-09 09:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:58:17.135582072 +0000 UTC m=+1.999766398" watchObservedRunningTime="2024-02-09 09:58:17.435787151 +0000 UTC m=+2.299971477" Feb 9 09:58:18.801507 sudo[1819]: pam_unix(sudo:session): session closed for user root Feb 9 09:58:18.801000 audit[1819]: USER_END pid=1819 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 09:58:18.801000 audit[1819]: CRED_DISP pid=1819 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 09:58:18.885990 sshd[1815]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:18.886000 audit[1815]: USER_END pid=1815 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 09:58:18.887000 audit[1815]: CRED_DISP pid=1815 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 09:58:18.888808 systemd[1]: sshd@6-10.200.20.12:22-10.200.12.6:43940.service: Deactivated successfully. Feb 9 09:58:18.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.12:22-10.200.12.6:43940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:58:18.890163 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 09:58:18.890769 systemd-logind[1429]: Session 9 logged out. Waiting for processes to exit. Feb 9 09:58:18.891626 systemd-logind[1429]: Removed session 9. Feb 9 09:58:20.971056 kubelet[2631]: I0209 09:58:20.970890 2631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-d10cdd880c" podStartSLOduration=5.970853868 pod.CreationTimestamp="2024-02-09 09:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:58:17.884692944 +0000 UTC m=+2.748877230" watchObservedRunningTime="2024-02-09 09:58:20.970853868 +0000 UTC m=+5.835038234" Feb 9 09:58:29.854952 kubelet[2631]: I0209 09:58:29.854922 2631 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 09:58:29.855818 env[1448]: time="2024-02-09T09:58:29.855764497Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 09:58:29.856365 kubelet[2631]: I0209 09:58:29.856345 2631 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 09:58:30.616674 kubelet[2631]: I0209 09:58:30.616632 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:30.641716 kubelet[2631]: I0209 09:58:30.641677 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7c8076d0-81d1-4691-a66e-4dbc3ab16954-kube-proxy\") pod \"kube-proxy-hqjk5\" (UID: \"7c8076d0-81d1-4691-a66e-4dbc3ab16954\") " pod="kube-system/kube-proxy-hqjk5" Feb 9 09:58:30.641936 kubelet[2631]: I0209 09:58:30.641922 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c8076d0-81d1-4691-a66e-4dbc3ab16954-xtables-lock\") pod \"kube-proxy-hqjk5\" (UID: \"7c8076d0-81d1-4691-a66e-4dbc3ab16954\") " pod="kube-system/kube-proxy-hqjk5" Feb 9 09:58:30.642040 kubelet[2631]: I0209 09:58:30.642028 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c8076d0-81d1-4691-a66e-4dbc3ab16954-lib-modules\") pod \"kube-proxy-hqjk5\" (UID: \"7c8076d0-81d1-4691-a66e-4dbc3ab16954\") " pod="kube-system/kube-proxy-hqjk5" Feb 9 09:58:30.642143 kubelet[2631]: I0209 09:58:30.642131 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgkvw\" (UniqueName: \"kubernetes.io/projected/7c8076d0-81d1-4691-a66e-4dbc3ab16954-kube-api-access-bgkvw\") pod \"kube-proxy-hqjk5\" (UID: \"7c8076d0-81d1-4691-a66e-4dbc3ab16954\") " pod="kube-system/kube-proxy-hqjk5" Feb 9 09:58:30.803907 kubelet[2631]: I0209 09:58:30.803866 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:30.843803 kubelet[2631]: I0209 09:58:30.843758 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1d0740e5-b4fc-44f4-ba00-0e25ed0b0e72-var-lib-calico\") pod \"tigera-operator-cfc98749c-dhhwn\" (UID: \"1d0740e5-b4fc-44f4-ba00-0e25ed0b0e72\") " pod="tigera-operator/tigera-operator-cfc98749c-dhhwn" Feb 9 09:58:30.844062 kubelet[2631]: I0209 09:58:30.844047 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx6m9\" (UniqueName: \"kubernetes.io/projected/1d0740e5-b4fc-44f4-ba00-0e25ed0b0e72-kube-api-access-jx6m9\") pod \"tigera-operator-cfc98749c-dhhwn\" (UID: \"1d0740e5-b4fc-44f4-ba00-0e25ed0b0e72\") " pod="tigera-operator/tigera-operator-cfc98749c-dhhwn" Feb 9 09:58:30.921408 env[1448]: time="2024-02-09T09:58:30.921019037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hqjk5,Uid:7c8076d0-81d1-4691-a66e-4dbc3ab16954,Namespace:kube-system,Attempt:0,}" Feb 9 09:58:31.107203 env[1448]: time="2024-02-09T09:58:31.107141667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-dhhwn,Uid:1d0740e5-b4fc-44f4-ba00-0e25ed0b0e72,Namespace:tigera-operator,Attempt:0,}" Feb 9 09:58:39.434606 env[1448]: time="2024-02-09T09:58:39.434512393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:58:39.434606 env[1448]: time="2024-02-09T09:58:39.434551073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:58:39.434606 env[1448]: time="2024-02-09T09:58:39.434561113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:58:39.435150 env[1448]: time="2024-02-09T09:58:39.435082440Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/17af0ed04d74a935f22467e913553dcb53a8f16c9af40d92cdb2db1d20833518 pid=2738 runtime=io.containerd.runc.v2 Feb 9 09:58:39.438384 env[1448]: time="2024-02-09T09:58:39.437212225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:58:39.438384 env[1448]: time="2024-02-09T09:58:39.437285266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:58:39.438384 env[1448]: time="2024-02-09T09:58:39.437341867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:58:39.438384 env[1448]: time="2024-02-09T09:58:39.437488869Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b63113fa7e648f4d8c1156f4a594b41f0d291c19ddc7dbe9dd838b6e3ca8eeb1 pid=2752 runtime=io.containerd.runc.v2 Feb 9 09:58:39.503478 env[1448]: time="2024-02-09T09:58:39.503425102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hqjk5,Uid:7c8076d0-81d1-4691-a66e-4dbc3ab16954,Namespace:kube-system,Attempt:0,} returns sandbox id \"17af0ed04d74a935f22467e913553dcb53a8f16c9af40d92cdb2db1d20833518\"" Feb 9 09:58:39.510278 env[1448]: time="2024-02-09T09:58:39.510233264Z" level=info msg="CreateContainer within sandbox \"17af0ed04d74a935f22467e913553dcb53a8f16c9af40d92cdb2db1d20833518\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:58:39.521931 env[1448]: time="2024-02-09T09:58:39.521869484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-dhhwn,Uid:1d0740e5-b4fc-44f4-ba00-0e25ed0b0e72,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b63113fa7e648f4d8c1156f4a594b41f0d291c19ddc7dbe9dd838b6e3ca8eeb1\"" Feb 9 09:58:39.525442 env[1448]: time="2024-02-09T09:58:39.523557784Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 9 09:58:39.554420 env[1448]: time="2024-02-09T09:58:39.554364554Z" level=info msg="CreateContainer within sandbox \"17af0ed04d74a935f22467e913553dcb53a8f16c9af40d92cdb2db1d20833518\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f71ea206b00a93450cfa3662ab56f3995af2ae9c4f30e0ab9c7d35e6a8f3add\"" Feb 9 09:58:39.556978 env[1448]: time="2024-02-09T09:58:39.555099483Z" level=info msg="StartContainer for \"3f71ea206b00a93450cfa3662ab56f3995af2ae9c4f30e0ab9c7d35e6a8f3add\"" Feb 9 09:58:39.612497 env[1448]: time="2024-02-09T09:58:39.612448813Z" level=info msg="StartContainer for \"3f71ea206b00a93450cfa3662ab56f3995af2ae9c4f30e0ab9c7d35e6a8f3add\" returns successfully" Feb 9 09:58:39.671363 kernel: kauditd_printk_skb: 9 callbacks suppressed Feb 9 09:58:39.671490 kernel: audit: type=1325 audit(1707472719.660:236): table=mangle:63 family=2 entries=1 
op=nft_register_chain pid=2869 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.660000 audit[2869]: NETFILTER_CFG table=mangle:63 family=2 entries=1 op=nft_register_chain pid=2869 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.660000 audit[2869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe1f8fa00 a2=0 a3=ffff8a31e6c0 items=0 ppid=2831 pid=2869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.707620 kernel: audit: type=1300 audit(1707472719.660:236): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe1f8fa00 a2=0 a3=ffff8a31e6c0 items=0 ppid=2831 pid=2869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.660000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 09:58:39.723386 kernel: audit: type=1327 audit(1707472719.660:236): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 09:58:39.669000 audit[2870]: NETFILTER_CFG table=mangle:64 family=10 entries=1 op=nft_register_chain pid=2870 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.738128 kernel: audit: type=1325 audit(1707472719.669:237): table=mangle:64 family=10 entries=1 op=nft_register_chain pid=2870 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.669000 audit[2870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb3d1f00 a2=0 a3=ffffb98346c0 items=0 ppid=2831 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.765960 kernel: audit: type=1300 audit(1707472719.669:237): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb3d1f00 a2=0 a3=ffffb98346c0 items=0 ppid=2831 pid=2870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.669000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 09:58:39.783402 kernel: audit: type=1327 audit(1707472719.669:237): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 09:58:39.680000 audit[2871]: NETFILTER_CFG table=nat:65 family=10 entries=1 op=nft_register_chain pid=2871 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.797176 kernel: audit: type=1325 audit(1707472719.680:238): table=nat:65 family=10 entries=1 op=nft_register_chain pid=2871 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.680000 audit[2871]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc9f27e0 a2=0 a3=ffffb70e36c0 items=0 ppid=2831 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.825411 kernel: audit: type=1300 
audit(1707472719.680:238): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc9f27e0 a2=0 a3=ffffb70e36c0 items=0 ppid=2831 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.680000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 09:58:39.840206 kernel: audit: type=1327 audit(1707472719.680:238): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 09:58:39.708000 audit[2872]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_chain pid=2872 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.855857 kernel: audit: type=1325 audit(1707472719.708:239): table=filter:66 family=10 entries=1 op=nft_register_chain pid=2872 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.708000 audit[2872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd49964c0 a2=0 a3=ffffa2eda6c0 items=0 ppid=2831 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.708000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 09:58:39.724000 audit[2873]: NETFILTER_CFG table=nat:67 family=2 entries=1 op=nft_register_chain pid=2873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.724000 audit[2873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbd3aae0 a2=0 a3=ffffa322b6c0 items=0 ppid=2831 pid=2873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.724000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 09:58:39.738000 audit[2874]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.738000 audit[2874]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc693410 a2=0 a3=ffff91c3b6c0 items=0 ppid=2831 pid=2874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 09:58:39.777000 audit[2875]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_chain pid=2875 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.777000 audit[2875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc7287830 a2=0 a3=ffffa25636c0 items=0 ppid=2831 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.777000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 09:58:39.781000 audit[2877]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=2877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.781000 audit[2877]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe4515690 a2=0 a3=ffffbc9446c0 items=0 ppid=2831 pid=2877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.781000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 09:58:39.826000 audit[2880]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.826000 audit[2880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd9304610 a2=0 a3=ffff88a6d6c0 items=0 ppid=2831 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 09:58:39.840000 audit[2881]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_chain pid=2881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.840000 audit[2881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff31c8f90 a2=0 a3=ffff82f2f6c0 items=0 ppid=2831 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.840000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 09:58:39.859000 audit[2883]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.859000 audit[2883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff6424ff0 a2=0 a3=ffffab02f6c0 items=0 ppid=2831 pid=2883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.859000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 09:58:39.861000 audit[2884]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_chain pid=2884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.861000 audit[2884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4a29850 a2=0 a3=ffffa4d726c0 
items=0 ppid=2831 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.861000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 09:58:39.864000 audit[2886]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=2886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.864000 audit[2886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff5ef2db0 a2=0 a3=ffff94a356c0 items=0 ppid=2831 pid=2886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.864000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 09:58:39.867000 audit[2889]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=2889 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.867000 audit[2889]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe94382b0 a2=0 a3=ffffaa6516c0 items=0 ppid=2831 pid=2889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.867000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 09:58:39.869000 audit[2890]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_chain pid=2890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.869000 audit[2890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2bb4f40 a2=0 a3=ffffa3e646c0 items=0 ppid=2831 pid=2890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.869000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 09:58:39.871000 audit[2892]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=2892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.871000 audit[2892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff7ceada0 a2=0 a3=ffffaa2a86c0 items=0 ppid=2831 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.871000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 09:58:39.873000 audit[2893]: NETFILTER_CFG table=filter:79 family=2 
entries=1 op=nft_register_chain pid=2893 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.873000 audit[2893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4e95100 a2=0 a3=ffffbf6d76c0 items=0 ppid=2831 pid=2893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 09:58:39.876000 audit[2895]: NETFILTER_CFG table=filter:80 family=2 entries=1 op=nft_register_rule pid=2895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.876000 audit[2895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd6e70510 a2=0 a3=ffffb55f96c0 items=0 ppid=2831 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.876000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 09:58:39.880000 audit[2898]: NETFILTER_CFG table=filter:81 family=2 entries=1 op=nft_register_rule pid=2898 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.880000 audit[2898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe1f5df40 a2=0 a3=ffff8d0546c0 items=0 ppid=2831 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 09:58:39.884000 audit[2901]: NETFILTER_CFG table=filter:82 family=2 entries=1 op=nft_register_rule pid=2901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.884000 audit[2901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcfb0ef70 a2=0 a3=ffffb918c6c0 items=0 ppid=2831 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.884000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 09:58:39.886000 audit[2902]: NETFILTER_CFG table=nat:83 family=2 entries=1 op=nft_register_chain pid=2902 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.886000 audit[2902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe1c06d00 a2=0 a3=ffffbe0806c0 items=0 ppid=2831 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 09:58:39.889000 audit[2904]: NETFILTER_CFG table=nat:84 family=2 entries=1 op=nft_register_rule pid=2904 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.889000 audit[2904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffdced9cd0 a2=0 a3=ffffbcc0b6c0 items=0 ppid=2831 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.889000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:58:39.892000 audit[2907]: NETFILTER_CFG table=nat:85 family=2 entries=1 op=nft_register_rule pid=2907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 09:58:39.892000 audit[2907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe60d40f0 a2=0 a3=ffff964be6c0 items=0 ppid=2831 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.892000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:58:39.916000 audit[2911]: NETFILTER_CFG table=filter:86 family=2 entries=6 op=nft_register_rule pid=2911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:39.916000 audit[2911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffe269ae30 a2=0 a3=ffff86eb56c0 items=0 ppid=2831 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:39.925000 audit[2911]: NETFILTER_CFG table=nat:87 family=2 entries=17 op=nft_register_chain pid=2911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:39.925000 audit[2911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffe269ae30 a2=0 a3=ffff86eb56c0 items=0 ppid=2831 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.925000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:39.930000 audit[2915]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_chain pid=2915 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.930000 audit[2915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdac40d30 a2=0 a3=ffffa69486c0 items=0 ppid=2831 pid=2915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.930000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 09:58:39.933000 audit[2917]: NETFILTER_CFG table=filter:89 family=10 entries=2 op=nft_register_chain pid=2917 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.933000 audit[2917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffead0eb90 a2=0 a3=ffff884c66c0 items=0 ppid=2831 pid=2917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.933000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 09:58:39.936000 audit[2920]: NETFILTER_CFG table=filter:90 family=10 entries=2 op=nft_register_chain pid=2920 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.936000 audit[2920]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff310bac0 a2=0 a3=ffffad6db6c0 items=0 ppid=2831 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 09:58:39.937000 audit[2921]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=2921 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.937000 audit[2921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff127bf30 a2=0 a3=ffffae2646c0 items=0 ppid=2831 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.937000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 09:58:39.939000 audit[2923]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=2923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.939000 audit[2923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffeb40e4e0 a2=0 a3=ffffabe116c0 items=0 ppid=2831 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 09:58:39.940000 audit[2924]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_chain pid=2924 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.940000 audit[2924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2d5fe60 a2=0 a3=ffff952876c0 items=0 ppid=2831 pid=2924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 09:58:39.942000 audit[2926]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=2926 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.942000 audit[2926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc85e1ea0 a2=0 a3=ffff86b936c0 items=0 ppid=2831 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 09:58:39.946000 audit[2929]: NETFILTER_CFG table=filter:95 family=10 entries=2 op=nft_register_chain pid=2929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.946000 audit[2929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe472eda0 a2=0 a3=ffffb9bc26c0 items=0 ppid=2831 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 09:58:39.947000 audit[2930]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_chain pid=2930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.947000 audit[2930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff3637400 a2=0 a3=ffff831706c0 items=0 ppid=2831 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.947000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 09:58:39.949000 audit[2932]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=2932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.949000 audit[2932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdea87330 a2=0 a3=ffffb58816c0 items=0 ppid=2831 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.949000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 09:58:39.950000 audit[2933]: NETFILTER_CFG table=filter:98 family=10 entries=1 op=nft_register_chain pid=2933 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.950000 audit[2933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd80bdba0 a2=0 a3=ffff854136c0 items=0 ppid=2831 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.950000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 09:58:39.953000 audit[2935]: NETFILTER_CFG table=filter:99 family=10 entries=1 op=nft_register_rule pid=2935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.953000 audit[2935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffce1faa20 a2=0 a3=ffffa3ad56c0 items=0 ppid=2831 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 09:58:39.957000 audit[2938]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_rule pid=2938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.957000 audit[2938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd08d3de0 a2=0 a3=ffffb09116c0 items=0 ppid=2831 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 09:58:39.960000 audit[2941]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=2941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.960000 audit[2941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe962a650 a2=0 a3=ffffa35066c0 items=0 ppid=2831 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 09:58:39.970000 audit[2942]: NETFILTER_CFG table=nat:102 family=10 entries=1 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Feb 9 09:58:39.970000 audit[2942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc05c8680 a2=0 a3=ffffaf7366c0 items=0 ppid=2831 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.970000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 09:58:39.973000 audit[2944]: NETFILTER_CFG table=nat:103 family=10 entries=2 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.973000 audit[2944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe0cdd790 a2=0 a3=ffffaf5f16c0 items=0 ppid=2831 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.973000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:58:39.976000 audit[2947]: NETFILTER_CFG table=nat:104 family=10 entries=2 op=nft_register_chain pid=2947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 09:58:39.976000 audit[2947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff21d4420 a2=0 a3=ffff8665c6c0 items=0 ppid=2831 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.976000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 09:58:39.983000 audit[2951]: NETFILTER_CFG table=filter:105 family=10 entries=3 op=nft_register_rule pid=2951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 09:58:39.983000 audit[2951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffc39d6e90 a2=0 a3=ffffa85096c0 items=0 ppid=2831 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.983000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:39.983000 audit[2951]: NETFILTER_CFG table=nat:106 family=10 entries=10 op=nft_register_chain pid=2951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 09:58:39.983000 audit[2951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffc39d6e90 a2=0 a3=ffffa85096c0 items=0 ppid=2831 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:39.983000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:40.425452 systemd[1]: 
run-containerd-runc-k8s.io-b63113fa7e648f4d8c1156f4a594b41f0d291c19ddc7dbe9dd838b6e3ca8eeb1-runc.hT5r8p.mount: Deactivated successfully. Feb 9 09:58:40.611664 kubelet[2631]: I0209 09:58:40.611531 2631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hqjk5" podStartSLOduration=10.611491873 pod.CreationTimestamp="2024-02-09 09:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:58:40.609060364 +0000 UTC m=+25.473244690" watchObservedRunningTime="2024-02-09 09:58:40.611491873 +0000 UTC m=+25.475676159" Feb 9 09:58:41.004728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount451512320.mount: Deactivated successfully. Feb 9 09:58:42.659596 env[1448]: time="2024-02-09T09:58:42.659545364Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:42.678137 env[1448]: time="2024-02-09T09:58:42.678079257Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:42.756122 env[1448]: time="2024-02-09T09:58:42.756075191Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:43.192235 env[1448]: time="2024-02-09T09:58:43.192182199Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:43.193623 env[1448]: time="2024-02-09T09:58:43.193079729Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f\"" Feb 9 09:58:43.196762 env[1448]: time="2024-02-09T09:58:43.196713290Z" level=info msg="CreateContainer within sandbox \"b63113fa7e648f4d8c1156f4a594b41f0d291c19ddc7dbe9dd838b6e3ca8eeb1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 9 09:58:43.992018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194166038.mount: Deactivated successfully. Feb 9 09:58:43.999104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866107720.mount: Deactivated successfully. 
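The audit PROCTITLE payloads in the NETFILTER_CFG records above are hex-encoded command lines with NUL bytes separating the arguments (standard Linux audit encoding). A minimal Python sketch, not part of the captured log and shown only as a reading aid, for turning such a payload back into the command that was run:

```python
# Sketch: decode an audit PROCTITLE payload (hex, NUL-separated args) into a
# readable command line. Assumes the standard Linux audit encoding used above.
def decode_proctitle(payload: str) -> str:
    return bytes.fromhex(payload).replace(b"\x00", b" ").decode("utf-8", "replace")

# One of the kube-proxy payloads above decodes to its chain-creation call:
#   iptables -w 5 -W 100000 -N KUBE-EXTERNAL-SERVICES -t filter
print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572"
))
```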
Feb 9 09:58:44.031598 env[1448]: time="2024-02-09T09:58:44.031548314Z" level=info msg="CreateContainer within sandbox \"b63113fa7e648f4d8c1156f4a594b41f0d291c19ddc7dbe9dd838b6e3ca8eeb1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"46a42d1396039a7bff7973fc52381f59ee9425ca654022ed1c5582f30ed368e8\"" Feb 9 09:58:44.034475 env[1448]: time="2024-02-09T09:58:44.034283384Z" level=info msg="StartContainer for \"46a42d1396039a7bff7973fc52381f59ee9425ca654022ed1c5582f30ed368e8\"" Feb 9 09:58:44.097020 env[1448]: time="2024-02-09T09:58:44.096480796Z" level=info msg="StartContainer for \"46a42d1396039a7bff7973fc52381f59ee9425ca654022ed1c5582f30ed368e8\" returns successfully" Feb 9 09:58:44.618087 kubelet[2631]: I0209 09:58:44.618045 2631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-dhhwn" podStartSLOduration=-9.223372022236769e+09 pod.CreationTimestamp="2024-02-09 09:58:30 +0000 UTC" firstStartedPulling="2024-02-09 09:58:39.523017537 +0000 UTC m=+24.387201863" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:58:44.617896436 +0000 UTC m=+29.482080722" watchObservedRunningTime="2024-02-09 09:58:44.618006237 +0000 UTC m=+29.482190563" Feb 9 09:58:47.152000 audit[3015]: NETFILTER_CFG table=filter:107 family=2 entries=12 op=nft_register_rule pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:47.159043 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 9 09:58:47.159151 kernel: audit: type=1325 audit(1707472727.152:280): table=filter:107 family=2 entries=12 op=nft_register_rule pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:47.152000 audit[3015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffd9f30a00 a2=0 a3=ffffb12636c0 items=0 ppid=2831 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:47.200728 kernel: audit: type=1300 audit(1707472727.152:280): arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffd9f30a00 a2=0 a3=ffffb12636c0 items=0 ppid=2831 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:47.152000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:47.214799 kernel: audit: type=1327 audit(1707472727.152:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:47.153000 audit[3015]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:47.231017 kernel: audit: type=1325 audit(1707472727.153:281): table=nat:108 family=2 entries=20 op=nft_register_rule pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:47.153000 audit[3015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd9f30a00 a2=0 a3=ffffb12636c0 items=0 ppid=2831 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
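The NETFILTER_CFG records above also carry per-call counts (`table=...`, `family=2` for IPv4 / `family=10` for IPv6, `entries=...`, `op=...`). A small illustrative helper, not part of the log, that tallies those registrations from a saved copy of this journal; the filename is a placeholder assumption:

```python
import re
from collections import Counter

# Tally NETFILTER_CFG audit records by table, address family, and operation to
# see how many rule/chain registrations kube-proxy issued during this window.
PATTERN = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+) op=(\w+)")

counts: Counter = Counter()
with open("node-boot.log", encoding="utf-8", errors="replace") as fh:  # placeholder path
    for line in fh:
        for table, family, entries, op in PATTERN.findall(line):
            ipv = "ipv4" if family == "2" else "ipv6" if family == "10" else family
            counts[(table, ipv, op)] += int(entries)

for (table, ipv, op), total in sorted(counts.items()):
    print(f"{table:8s} {ipv:5s} {op:20s} {total:5d} entries")
```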
Feb 9 09:58:47.259216 kernel: audit: type=1300 audit(1707472727.153:281): arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd9f30a00 a2=0 a3=ffffb12636c0 items=0 ppid=2831 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:47.153000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:47.273982 kernel: audit: type=1327 audit(1707472727.153:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:47.290239 kubelet[2631]: I0209 09:58:47.290207 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:47.334516 kubelet[2631]: I0209 09:58:47.334485 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c-tigera-ca-bundle\") pod \"calico-typha-6994f59946-fhpjt\" (UID: \"c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c\") " pod="calico-system/calico-typha-6994f59946-fhpjt" Feb 9 09:58:47.334829 kubelet[2631]: I0209 09:58:47.334817 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c-typha-certs\") pod \"calico-typha-6994f59946-fhpjt\" (UID: \"c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c\") " pod="calico-system/calico-typha-6994f59946-fhpjt" Feb 9 09:58:47.334987 kubelet[2631]: I0209 09:58:47.334974 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-697xx\" (UniqueName: \"kubernetes.io/projected/c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c-kube-api-access-697xx\") pod \"calico-typha-6994f59946-fhpjt\" (UID: \"c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c\") " pod="calico-system/calico-typha-6994f59946-fhpjt" Feb 9 09:58:47.368000 audit[3041]: NETFILTER_CFG table=filter:109 family=2 entries=13 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:47.368000 audit[3041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=fffffce3ddb0 a2=0 a3=ffff812296c0 items=0 ppid=2831 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:47.398896 kubelet[2631]: I0209 09:58:47.398860 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:47.415017 kernel: audit: type=1325 audit(1707472727.368:282): table=filter:109 family=2 entries=13 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:47.415153 kernel: audit: type=1300 audit(1707472727.368:282): arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=fffffce3ddb0 a2=0 a3=ffff812296c0 items=0 ppid=2831 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:47.368000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:47.435362 kubelet[2631]: I0209 09:58:47.435338 2631 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-flexvol-driver-host\") pod \"calico-node-x875w\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " pod="calico-system/calico-node-x875w" Feb 9 09:58:47.435529 kernel: audit: type=1327 audit(1707472727.368:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:47.435621 kubelet[2631]: I0209 09:58:47.435609 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-var-lib-calico\") pod \"calico-node-x875w\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " pod="calico-system/calico-node-x875w" Feb 9 09:58:47.435715 kubelet[2631]: I0209 09:58:47.435705 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-cni-bin-dir\") pod \"calico-node-x875w\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " pod="calico-system/calico-node-x875w" Feb 9 09:58:47.435803 kubelet[2631]: I0209 09:58:47.435793 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-cni-log-dir\") pod \"calico-node-x875w\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " pod="calico-system/calico-node-x875w" Feb 9 09:58:47.435975 kubelet[2631]: I0209 09:58:47.435935 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-var-run-calico\") pod \"calico-node-x875w\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " pod="calico-system/calico-node-x875w" Feb 9 09:58:47.436078 kubelet[2631]: I0209 09:58:47.436066 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-policysync\") pod \"calico-node-x875w\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " pod="calico-system/calico-node-x875w" Feb 9 09:58:47.436151 kubelet[2631]: I0209 09:58:47.436142 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-tigera-ca-bundle\") pod \"calico-node-x875w\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " pod="calico-system/calico-node-x875w" Feb 9 09:58:47.436226 kubelet[2631]: I0209 09:58:47.436216 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-cni-net-dir\") pod \"calico-node-x875w\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " pod="calico-system/calico-node-x875w" Feb 9 09:58:47.369000 audit[3041]: NETFILTER_CFG table=nat:110 family=2 entries=20 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:47.436426 kubelet[2631]: I0209 09:58:47.436413 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-c2fbq\" (UniqueName: \"kubernetes.io/projected/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-kube-api-access-c2fbq\") pod \"calico-node-x875w\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " pod="calico-system/calico-node-x875w" Feb 9 09:58:47.436541 kubelet[2631]: I0209 09:58:47.436530 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-lib-modules\") pod \"calico-node-x875w\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " pod="calico-system/calico-node-x875w" Feb 9 09:58:47.436636 kubelet[2631]: I0209 09:58:47.436626 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-node-certs\") pod \"calico-node-x875w\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " pod="calico-system/calico-node-x875w" Feb 9 09:58:47.436736 kubelet[2631]: I0209 09:58:47.436725 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-xtables-lock\") pod \"calico-node-x875w\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " pod="calico-system/calico-node-x875w" Feb 9 09:58:47.451969 kernel: audit: type=1325 audit(1707472727.369:283): table=nat:110 family=2 entries=20 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:47.369000 audit[3041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=fffffce3ddb0 a2=0 a3=ffff812296c0 items=0 ppid=2831 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:47.369000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:47.549361 kubelet[2631]: I0209 09:58:47.547153 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:47.549361 kubelet[2631]: E0209 09:58:47.547457 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:58:47.558399 kubelet[2631]: E0209 09:58:47.556429 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.558399 kubelet[2631]: W0209 09:58:47.556454 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.558399 kubelet[2631]: E0209 09:58:47.556491 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:47.558399 kubelet[2631]: E0209 09:58:47.556670 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.558399 kubelet[2631]: W0209 09:58:47.556679 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.558399 kubelet[2631]: E0209 09:58:47.556697 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.558399 kubelet[2631]: E0209 09:58:47.556859 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.558399 kubelet[2631]: W0209 09:58:47.556869 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.558399 kubelet[2631]: E0209 09:58:47.556885 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.558399 kubelet[2631]: E0209 09:58:47.557065 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.558760 kubelet[2631]: W0209 09:58:47.557074 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.558760 kubelet[2631]: E0209 09:58:47.557086 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.558760 kubelet[2631]: E0209 09:58:47.557213 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.558760 kubelet[2631]: W0209 09:58:47.557221 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.558760 kubelet[2631]: E0209 09:58:47.557231 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.570747 kubelet[2631]: E0209 09:58:47.570707 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.570747 kubelet[2631]: W0209 09:58:47.570734 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.570890 kubelet[2631]: E0209 09:58:47.570755 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:47.593665 env[1448]: time="2024-02-09T09:58:47.593262524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6994f59946-fhpjt,Uid:c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c,Namespace:calico-system,Attempt:0,}" Feb 9 09:58:47.619238 kubelet[2631]: E0209 09:58:47.619200 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.619238 kubelet[2631]: W0209 09:58:47.619227 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.619238 kubelet[2631]: E0209 09:58:47.619248 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.619613 kubelet[2631]: E0209 09:58:47.619589 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.619613 kubelet[2631]: W0209 09:58:47.619605 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.619710 kubelet[2631]: E0209 09:58:47.619622 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.619908 kubelet[2631]: E0209 09:58:47.619885 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.619908 kubelet[2631]: W0209 09:58:47.619901 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.619988 kubelet[2631]: E0209 09:58:47.619916 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.620717 kubelet[2631]: E0209 09:58:47.620690 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.620717 kubelet[2631]: W0209 09:58:47.620718 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.620823 kubelet[2631]: E0209 09:58:47.620734 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:47.621787 kubelet[2631]: E0209 09:58:47.621761 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.621787 kubelet[2631]: W0209 09:58:47.621783 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.621878 kubelet[2631]: E0209 09:58:47.621798 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.622061 kubelet[2631]: E0209 09:58:47.622036 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.622061 kubelet[2631]: W0209 09:58:47.622056 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.622135 kubelet[2631]: E0209 09:58:47.622069 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.622403 kubelet[2631]: E0209 09:58:47.622387 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.622475 kubelet[2631]: W0209 09:58:47.622462 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.622567 kubelet[2631]: E0209 09:58:47.622557 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.622929 kubelet[2631]: E0209 09:58:47.622914 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.623019 kubelet[2631]: W0209 09:58:47.623007 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.623079 kubelet[2631]: E0209 09:58:47.623069 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.623363 kubelet[2631]: E0209 09:58:47.623348 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.623460 kubelet[2631]: W0209 09:58:47.623446 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.623520 kubelet[2631]: E0209 09:58:47.623511 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:47.624147 kubelet[2631]: E0209 09:58:47.624104 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.624823 kubelet[2631]: W0209 09:58:47.624800 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.624914 kubelet[2631]: E0209 09:58:47.624901 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.625213 kubelet[2631]: E0209 09:58:47.625199 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.625292 kubelet[2631]: W0209 09:58:47.625280 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.625379 kubelet[2631]: E0209 09:58:47.625369 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.626476 kubelet[2631]: E0209 09:58:47.626458 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.626578 kubelet[2631]: W0209 09:58:47.626564 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.626639 kubelet[2631]: E0209 09:58:47.626630 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.627105 kubelet[2631]: E0209 09:58:47.627091 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.627183 kubelet[2631]: W0209 09:58:47.627171 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.627237 kubelet[2631]: E0209 09:58:47.627228 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.627492 kubelet[2631]: E0209 09:58:47.627479 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.627580 kubelet[2631]: W0209 09:58:47.627566 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.627642 kubelet[2631]: E0209 09:58:47.627632 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:47.628225 kubelet[2631]: E0209 09:58:47.628210 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.628335 kubelet[2631]: W0209 09:58:47.628288 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.628398 kubelet[2631]: E0209 09:58:47.628386 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.639251 kubelet[2631]: E0209 09:58:47.639224 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.639251 kubelet[2631]: W0209 09:58:47.639247 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.639403 kubelet[2631]: E0209 09:58:47.639267 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.639403 kubelet[2631]: I0209 09:58:47.639298 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/99a671b0-f7e8-4988-baf5-8e0d96bfea44-socket-dir\") pod \"csi-node-driver-s2mw5\" (UID: \"99a671b0-f7e8-4988-baf5-8e0d96bfea44\") " pod="calico-system/csi-node-driver-s2mw5" Feb 9 09:58:47.639486 kubelet[2631]: E0209 09:58:47.639465 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.639486 kubelet[2631]: W0209 09:58:47.639480 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.639562 kubelet[2631]: E0209 09:58:47.639491 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.639562 kubelet[2631]: I0209 09:58:47.639509 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxczv\" (UniqueName: \"kubernetes.io/projected/99a671b0-f7e8-4988-baf5-8e0d96bfea44-kube-api-access-gxczv\") pod \"csi-node-driver-s2mw5\" (UID: \"99a671b0-f7e8-4988-baf5-8e0d96bfea44\") " pod="calico-system/csi-node-driver-s2mw5" Feb 9 09:58:47.639787 kubelet[2631]: E0209 09:58:47.639769 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.639787 kubelet[2631]: W0209 09:58:47.639783 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.639872 kubelet[2631]: E0209 09:58:47.639803 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:47.639872 kubelet[2631]: I0209 09:58:47.639820 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/99a671b0-f7e8-4988-baf5-8e0d96bfea44-varrun\") pod \"csi-node-driver-s2mw5\" (UID: \"99a671b0-f7e8-4988-baf5-8e0d96bfea44\") " pod="calico-system/csi-node-driver-s2mw5" Feb 9 09:58:47.639968 kubelet[2631]: E0209 09:58:47.639953 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.639968 kubelet[2631]: W0209 09:58:47.639965 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.640025 kubelet[2631]: E0209 09:58:47.639976 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.640170 kubelet[2631]: E0209 09:58:47.640156 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.640170 kubelet[2631]: W0209 09:58:47.640167 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.640243 kubelet[2631]: E0209 09:58:47.640178 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.640369 kubelet[2631]: E0209 09:58:47.640355 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.640369 kubelet[2631]: W0209 09:58:47.640367 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.640431 kubelet[2631]: E0209 09:58:47.640378 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.640586 kubelet[2631]: E0209 09:58:47.640571 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.640586 kubelet[2631]: W0209 09:58:47.640582 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.640664 kubelet[2631]: E0209 09:58:47.640592 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:47.640772 kubelet[2631]: E0209 09:58:47.640758 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.640772 kubelet[2631]: W0209 09:58:47.640770 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.640828 kubelet[2631]: E0209 09:58:47.640781 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.640971 kubelet[2631]: E0209 09:58:47.640957 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.640971 kubelet[2631]: W0209 09:58:47.640968 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.641056 kubelet[2631]: E0209 09:58:47.640977 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.641216 kubelet[2631]: E0209 09:58:47.641200 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.641216 kubelet[2631]: W0209 09:58:47.641213 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.641297 kubelet[2631]: E0209 09:58:47.641224 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.641297 kubelet[2631]: I0209 09:58:47.641247 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/99a671b0-f7e8-4988-baf5-8e0d96bfea44-registration-dir\") pod \"csi-node-driver-s2mw5\" (UID: \"99a671b0-f7e8-4988-baf5-8e0d96bfea44\") " pod="calico-system/csi-node-driver-s2mw5" Feb 9 09:58:47.641443 kubelet[2631]: E0209 09:58:47.641416 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.641443 kubelet[2631]: W0209 09:58:47.641430 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.641443 kubelet[2631]: E0209 09:58:47.641441 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:47.641522 kubelet[2631]: I0209 09:58:47.641459 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99a671b0-f7e8-4988-baf5-8e0d96bfea44-kubelet-dir\") pod \"csi-node-driver-s2mw5\" (UID: \"99a671b0-f7e8-4988-baf5-8e0d96bfea44\") " pod="calico-system/csi-node-driver-s2mw5" Feb 9 09:58:47.641679 kubelet[2631]: E0209 09:58:47.641664 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.641679 kubelet[2631]: W0209 09:58:47.641676 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.641754 kubelet[2631]: E0209 09:58:47.641687 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.641823 kubelet[2631]: E0209 09:58:47.641811 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.641823 kubelet[2631]: W0209 09:58:47.641821 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.641884 kubelet[2631]: E0209 09:58:47.641830 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.642041 kubelet[2631]: E0209 09:58:47.642025 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.642041 kubelet[2631]: W0209 09:58:47.642038 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.642109 kubelet[2631]: E0209 09:58:47.642048 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.642270 kubelet[2631]: E0209 09:58:47.642244 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.642270 kubelet[2631]: W0209 09:58:47.642264 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.642363 kubelet[2631]: E0209 09:58:47.642277 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:47.703189 env[1448]: time="2024-02-09T09:58:47.702841211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x875w,Uid:d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da,Namespace:calico-system,Attempt:0,}" Feb 9 09:58:47.742168 kubelet[2631]: E0209 09:58:47.742140 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.742344 kubelet[2631]: W0209 09:58:47.742327 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.742444 kubelet[2631]: E0209 09:58:47.742432 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.742713 kubelet[2631]: E0209 09:58:47.742701 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.742788 kubelet[2631]: W0209 09:58:47.742775 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.742864 kubelet[2631]: E0209 09:58:47.742851 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.743083 kubelet[2631]: E0209 09:58:47.743053 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.743083 kubelet[2631]: W0209 09:58:47.743072 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.743158 kubelet[2631]: E0209 09:58:47.743095 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.743239 kubelet[2631]: E0209 09:58:47.743221 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.743239 kubelet[2631]: W0209 09:58:47.743235 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.743239 kubelet[2631]: E0209 09:58:47.743245 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:47.743383 kubelet[2631]: E0209 09:58:47.743370 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.743383 kubelet[2631]: W0209 09:58:47.743380 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.743445 kubelet[2631]: E0209 09:58:47.743395 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.743592 kubelet[2631]: E0209 09:58:47.743572 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.743592 kubelet[2631]: W0209 09:58:47.743585 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.743647 kubelet[2631]: E0209 09:58:47.743595 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.743750 kubelet[2631]: E0209 09:58:47.743732 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.743750 kubelet[2631]: W0209 09:58:47.743746 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.743812 kubelet[2631]: E0209 09:58:47.743763 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.743912 kubelet[2631]: E0209 09:58:47.743894 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.743912 kubelet[2631]: W0209 09:58:47.743908 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.743969 kubelet[2631]: E0209 09:58:47.743920 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.744073 kubelet[2631]: E0209 09:58:47.744049 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.744073 kubelet[2631]: W0209 09:58:47.744064 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.744138 kubelet[2631]: E0209 09:58:47.744077 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:47.744250 kubelet[2631]: E0209 09:58:47.744236 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.744250 kubelet[2631]: W0209 09:58:47.744249 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.744317 kubelet[2631]: E0209 09:58:47.744265 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.744450 kubelet[2631]: E0209 09:58:47.744433 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.744450 kubelet[2631]: W0209 09:58:47.744446 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.744557 kubelet[2631]: E0209 09:58:47.744539 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.744631 kubelet[2631]: E0209 09:58:47.744568 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.744693 kubelet[2631]: W0209 09:58:47.744680 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.744781 kubelet[2631]: E0209 09:58:47.744759 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.746373 kubelet[2631]: E0209 09:58:47.746359 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.746463 kubelet[2631]: W0209 09:58:47.746449 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.746537 kubelet[2631]: E0209 09:58:47.746527 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.746747 kubelet[2631]: E0209 09:58:47.746711 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.746747 kubelet[2631]: W0209 09:58:47.746728 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.746747 kubelet[2631]: E0209 09:58:47.746746 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:47.746888 kubelet[2631]: E0209 09:58:47.746875 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.746888 kubelet[2631]: W0209 09:58:47.746886 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.746950 kubelet[2631]: E0209 09:58:47.746943 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.747033 kubelet[2631]: E0209 09:58:47.747021 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.747033 kubelet[2631]: W0209 09:58:47.747031 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.747091 kubelet[2631]: E0209 09:58:47.747085 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.747187 kubelet[2631]: E0209 09:58:47.747175 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.747187 kubelet[2631]: W0209 09:58:47.747185 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.747245 kubelet[2631]: E0209 09:58:47.747238 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.747426 kubelet[2631]: E0209 09:58:47.747412 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.747426 kubelet[2631]: W0209 09:58:47.747424 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.747490 kubelet[2631]: E0209 09:58:47.747439 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.747588 kubelet[2631]: E0209 09:58:47.747568 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.747588 kubelet[2631]: W0209 09:58:47.747581 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.747644 kubelet[2631]: E0209 09:58:47.747592 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:47.747763 kubelet[2631]: E0209 09:58:47.747745 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.747814 kubelet[2631]: W0209 09:58:47.747791 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.747814 kubelet[2631]: E0209 09:58:47.747815 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.748040 kubelet[2631]: E0209 09:58:47.748022 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.748078 kubelet[2631]: W0209 09:58:47.748040 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.748078 kubelet[2631]: E0209 09:58:47.748054 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.748406 kubelet[2631]: E0209 09:58:47.748389 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.748495 kubelet[2631]: W0209 09:58:47.748480 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.748573 kubelet[2631]: E0209 09:58:47.748562 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.748774 kubelet[2631]: E0209 09:58:47.748751 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.748774 kubelet[2631]: W0209 09:58:47.748771 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.748854 kubelet[2631]: E0209 09:58:47.748792 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.749025 kubelet[2631]: E0209 09:58:47.749003 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.749025 kubelet[2631]: W0209 09:58:47.749021 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.749087 kubelet[2631]: E0209 09:58:47.749033 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:47.749191 kubelet[2631]: E0209 09:58:47.749177 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.749191 kubelet[2631]: W0209 09:58:47.749189 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.749244 kubelet[2631]: E0209 09:58:47.749200 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.844861 kubelet[2631]: E0209 09:58:47.844834 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.845033 kubelet[2631]: W0209 09:58:47.845015 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.845107 kubelet[2631]: E0209 09:58:47.845096 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.925762 env[1448]: time="2024-02-09T09:58:47.925695024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:58:47.926005 env[1448]: time="2024-02-09T09:58:47.925928146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:58:47.926769 env[1448]: time="2024-02-09T09:58:47.926130748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:58:47.926769 env[1448]: time="2024-02-09T09:58:47.926268390Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299 pid=3121 runtime=io.containerd.runc.v2 Feb 9 09:58:47.946494 kubelet[2631]: E0209 09:58:47.946397 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:47.946494 kubelet[2631]: W0209 09:58:47.946419 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:47.946494 kubelet[2631]: E0209 09:58:47.946441 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:47.974595 env[1448]: time="2024-02-09T09:58:47.974461463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:58:47.974763 env[1448]: time="2024-02-09T09:58:47.974737226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:58:47.974866 env[1448]: time="2024-02-09T09:58:47.974844107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:58:47.975108 env[1448]: time="2024-02-09T09:58:47.975079909Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924 pid=3149 runtime=io.containerd.runc.v2 Feb 9 09:58:48.016964 env[1448]: time="2024-02-09T09:58:48.016912712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6994f59946-fhpjt,Uid:c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\"" Feb 9 09:58:48.028995 env[1448]: time="2024-02-09T09:58:48.028941759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 9 09:58:48.047717 kubelet[2631]: E0209 09:58:48.047678 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:48.047717 kubelet[2631]: W0209 09:58:48.047701 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:48.047717 kubelet[2631]: E0209 09:58:48.047723 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:48.055817 env[1448]: time="2024-02-09T09:58:48.055776320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x875w,Uid:d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da,Namespace:calico-system,Attempt:0,} returns sandbox id \"9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924\"" Feb 9 09:58:48.105190 kubelet[2631]: E0209 09:58:48.105102 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:48.105190 kubelet[2631]: W0209 09:58:48.105123 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:48.105190 kubelet[2631]: E0209 09:58:48.105154 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:48.517000 audit[3228]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:48.517000 audit[3228]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffc2a647a0 a2=0 a3=ffff984646c0 items=0 ppid=2831 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:48.517000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:48.518000 audit[3228]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:48.518000 audit[3228]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffc2a647a0 a2=0 a3=ffff984646c0 items=0 ppid=2831 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:48.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:49.539379 kubelet[2631]: E0209 09:58:49.539334 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:58:49.577000 audit[3266]: NETFILTER_CFG table=filter:113 family=2 entries=14 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:49.577000 audit[3266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffe2adae60 a2=0 a3=ffff869136c0 items=0 ppid=2831 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:49.577000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:49.577000 audit[3266]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:49.577000 audit[3266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffe2adae60 a2=0 a3=ffff869136c0 items=0 ppid=2831 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:49.577000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:50.191027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896456365.mount: Deactivated successfully. 
Feb 9 09:58:50.800907 env[1448]: time="2024-02-09T09:58:50.800861206Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:50.811884 env[1448]: time="2024-02-09T09:58:50.811841038Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:50.817875 env[1448]: time="2024-02-09T09:58:50.817836419Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:50.824195 env[1448]: time="2024-02-09T09:58:50.824157604Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:50.824678 env[1448]: time="2024-02-09T09:58:50.824641969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969\"" Feb 9 09:58:50.826367 env[1448]: time="2024-02-09T09:58:50.825424497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 09:58:50.843224 env[1448]: time="2024-02-09T09:58:50.843175118Z" level=info msg="CreateContainer within sandbox \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 9 09:58:51.024112 env[1448]: time="2024-02-09T09:58:51.024048522Z" level=info msg="CreateContainer within sandbox \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\"" Feb 9 09:58:51.024950 env[1448]: time="2024-02-09T09:58:51.024924611Z" level=info msg="StartContainer for \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\"" Feb 9 09:58:51.088763 env[1448]: time="2024-02-09T09:58:51.088665973Z" level=info msg="StartContainer for \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\" returns successfully" Feb 9 09:58:51.183510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560052130.mount: Deactivated successfully. 
Feb 9 09:58:51.539894 kubelet[2631]: E0209 09:58:51.539854 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:58:51.620927 env[1448]: time="2024-02-09T09:58:51.620885217Z" level=info msg="StopContainer for \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\" with timeout 300 (s)" Feb 9 09:58:51.621492 env[1448]: time="2024-02-09T09:58:51.621470182Z" level=info msg="Stop container \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\" with signal terminated" Feb 9 09:58:51.649588 kubelet[2631]: I0209 09:58:51.649548 2631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6994f59946-fhpjt" podStartSLOduration=-9.223372032205362e+09 pod.CreationTimestamp="2024-02-09 09:58:47 +0000 UTC" firstStartedPulling="2024-02-09 09:58:48.028294552 +0000 UTC m=+32.892478838" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:58:51.634003749 +0000 UTC m=+36.498188115" watchObservedRunningTime="2024-02-09 09:58:51.649414264 +0000 UTC m=+36.513598590" Feb 9 09:58:51.661421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b-rootfs.mount: Deactivated successfully. Feb 9 09:58:51.707000 audit[3358]: NETFILTER_CFG table=filter:115 family=2 entries=13 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:51.707000 audit[3358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=fffffd1a6e80 a2=0 a3=ffff8cc686c0 items=0 ppid=2831 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:51.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:51.708000 audit[3358]: NETFILTER_CFG table=nat:116 family=2 entries=27 op=nft_register_chain pid=3358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:51.708000 audit[3358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=fffffd1a6e80 a2=0 a3=ffff8cc686c0 items=0 ppid=2831 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:51.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:53.539480 kubelet[2631]: E0209 09:58:53.539449 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:58:55.539764 kubelet[2631]: E0209 09:58:55.539727 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:58:58.078343 kubelet[2631]: E0209 09:58:57.540593 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:58:58.078644 env[1448]: time="2024-02-09T09:58:55.642842613Z" level=error msg="collecting metrics for aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b" error="cgroups: cgroup deleted: unknown" Feb 9 09:58:58.176144 env[1448]: time="2024-02-09T09:58:58.176096548Z" level=info msg="shim disconnected" id=aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b Feb 9 09:58:58.176144 env[1448]: time="2024-02-09T09:58:58.176136908Z" level=warning msg="cleaning up after shim disconnected" id=aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b namespace=k8s.io Feb 9 09:58:58.176144 env[1448]: time="2024-02-09T09:58:58.176146828Z" level=info msg="cleaning up dead shim" Feb 9 09:58:58.182717 env[1448]: time="2024-02-09T09:58:58.182668809Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3359 runtime=io.containerd.runc.v2\n" Feb 9 09:58:58.186627 env[1448]: time="2024-02-09T09:58:58.186587485Z" level=info msg="StopContainer for \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\" returns successfully" Feb 9 09:58:58.187169 env[1448]: time="2024-02-09T09:58:58.187142170Z" level=info msg="StopPodSandbox for \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\"" Feb 9 09:58:58.187250 env[1448]: time="2024-02-09T09:58:58.187200890Z" level=info msg="Container to stop \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:58:58.189087 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299-shm.mount: Deactivated successfully. Feb 9 09:58:58.212968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299-rootfs.mount: Deactivated successfully. 
Feb 9 09:58:58.374454 env[1448]: time="2024-02-09T09:58:58.373861535Z" level=info msg="shim disconnected" id=4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299 Feb 9 09:58:58.374454 env[1448]: time="2024-02-09T09:58:58.373912415Z" level=warning msg="cleaning up after shim disconnected" id=4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299 namespace=k8s.io Feb 9 09:58:58.374454 env[1448]: time="2024-02-09T09:58:58.373930176Z" level=info msg="cleaning up dead shim" Feb 9 09:58:58.380719 env[1448]: time="2024-02-09T09:58:58.380677998Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:58:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3394 runtime=io.containerd.runc.v2\n" Feb 9 09:58:58.420011 env[1448]: time="2024-02-09T09:58:58.419960041Z" level=info msg="TearDown network for sandbox \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\" successfully" Feb 9 09:58:58.420011 env[1448]: time="2024-02-09T09:58:58.420000001Z" level=info msg="StopPodSandbox for \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\" returns successfully" Feb 9 09:58:58.507221 kubelet[2631]: E0209 09:58:58.507077 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.507221 kubelet[2631]: W0209 09:58:58.507095 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.507221 kubelet[2631]: E0209 09:58:58.507113 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.507221 kubelet[2631]: I0209 09:58:58.507146 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c-tigera-ca-bundle\") pod \"c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c\" (UID: \"c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c\") " Feb 9 09:58:58.507751 kubelet[2631]: E0209 09:58:58.507557 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.507751 kubelet[2631]: W0209 09:58:58.507570 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.507751 kubelet[2631]: E0209 09:58:58.507583 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:58.507751 kubelet[2631]: I0209 09:58:58.507605 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c-typha-certs\") pod \"c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c\" (UID: \"c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c\") " Feb 9 09:58:58.508217 kubelet[2631]: E0209 09:58:58.507949 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.508217 kubelet[2631]: W0209 09:58:58.507961 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.508217 kubelet[2631]: E0209 09:58:58.507978 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.508217 kubelet[2631]: I0209 09:58:58.508002 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-697xx\" (UniqueName: \"kubernetes.io/projected/c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c-kube-api-access-697xx\") pod \"c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c\" (UID: \"c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c\") " Feb 9 09:58:58.508629 kubelet[2631]: E0209 09:58:58.508418 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.508629 kubelet[2631]: W0209 09:58:58.508430 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.508629 kubelet[2631]: E0209 09:58:58.508455 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.510291 kubelet[2631]: E0209 09:58:58.509457 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.510291 kubelet[2631]: W0209 09:58:58.509476 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.510291 kubelet[2631]: E0209 09:58:58.509492 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.512943 systemd[1]: var-lib-kubelet-pods-c74a0ab7\x2d73f9\x2d4f41\x2dbc7c\x2d7f16d8a4958c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Feb 9 09:58:58.514899 systemd[1]: var-lib-kubelet-pods-c74a0ab7\x2d73f9\x2d4f41\x2dbc7c\x2d7f16d8a4958c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d697xx.mount: Deactivated successfully. Feb 9 09:58:58.516207 kubelet[2631]: I0209 09:58:58.516184 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c-kube-api-access-697xx" (OuterVolumeSpecName: "kube-api-access-697xx") pod "c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c" (UID: "c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c"). 
InnerVolumeSpecName "kube-api-access-697xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:58:58.516465 kubelet[2631]: E0209 09:58:58.516453 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.516607 kubelet[2631]: W0209 09:58:58.516591 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.516699 kubelet[2631]: E0209 09:58:58.516687 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.516869 kubelet[2631]: W0209 09:58:58.516858 2631 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c/volumes/kubernetes.io~configmap/tigera-ca-bundle: clearQuota called, but quotas disabled Feb 9 09:58:58.517124 kubelet[2631]: I0209 09:58:58.517108 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c" (UID: "c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:58:58.519921 systemd[1]: var-lib-kubelet-pods-c74a0ab7\x2d73f9\x2d4f41\x2dbc7c\x2d7f16d8a4958c-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Feb 9 09:58:58.520830 kubelet[2631]: I0209 09:58:58.520587 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c" (UID: "c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:58:58.608804 kubelet[2631]: I0209 09:58:58.608770 2631 reconciler_common.go:295] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c-tigera-ca-bundle\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:58:58.608804 kubelet[2631]: I0209 09:58:58.608802 2631 reconciler_common.go:295] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c-typha-certs\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:58:58.608804 kubelet[2631]: I0209 09:58:58.608814 2631 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-697xx\" (UniqueName: \"kubernetes.io/projected/c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c-kube-api-access-697xx\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:58:58.633546 kubelet[2631]: I0209 09:58:58.632252 2631 scope.go:115] "RemoveContainer" containerID="aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b" Feb 9 09:58:58.642802 env[1448]: time="2024-02-09T09:58:58.639113266Z" level=info msg="RemoveContainer for \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\"" Feb 9 09:58:58.660688 kubelet[2631]: I0209 09:58:58.660661 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:58.660894 kubelet[2631]: E0209 09:58:58.660881 2631 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c" containerName="calico-typha" Feb 9 09:58:58.661028 kubelet[2631]: I0209 09:58:58.661017 2631 memory_manager.go:346] "RemoveStaleState removing state" podUID="c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c" containerName="calico-typha" Feb 9 09:58:58.692132 kubelet[2631]: E0209 09:58:58.692104 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.692271 kubelet[2631]: W0209 09:58:58.692256 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.692388 kubelet[2631]: E0209 09:58:58.692377 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.692652 kubelet[2631]: E0209 09:58:58.692639 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.692735 kubelet[2631]: W0209 09:58:58.692723 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.692796 kubelet[2631]: E0209 09:58:58.692786 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:58.693040 kubelet[2631]: E0209 09:58:58.693029 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.693116 kubelet[2631]: W0209 09:58:58.693104 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.693188 kubelet[2631]: E0209 09:58:58.693179 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.693453 kubelet[2631]: E0209 09:58:58.693443 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.693538 kubelet[2631]: W0209 09:58:58.693526 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.693629 kubelet[2631]: E0209 09:58:58.693611 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.693878 kubelet[2631]: E0209 09:58:58.693865 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.693967 kubelet[2631]: W0209 09:58:58.693956 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.694027 kubelet[2631]: E0209 09:58:58.694018 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.694250 kubelet[2631]: E0209 09:58:58.694239 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.694349 kubelet[2631]: W0209 09:58:58.694338 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.694415 kubelet[2631]: E0209 09:58:58.694407 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.694653 kubelet[2631]: E0209 09:58:58.694643 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.694732 kubelet[2631]: W0209 09:58:58.694721 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.694790 kubelet[2631]: E0209 09:58:58.694780 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:58.694993 kubelet[2631]: E0209 09:58:58.694970 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.695068 kubelet[2631]: W0209 09:58:58.695057 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.695130 kubelet[2631]: E0209 09:58:58.695122 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.696127 kubelet[2631]: E0209 09:58:58.696114 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.696226 kubelet[2631]: W0209 09:58:58.696215 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.696286 kubelet[2631]: E0209 09:58:58.696278 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.703000 audit[3466]: NETFILTER_CFG table=filter:117 family=2 entries=13 op=nft_register_rule pid=3466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:58.709841 kernel: kauditd_printk_skb: 20 callbacks suppressed Feb 9 09:58:58.709898 kernel: audit: type=1325 audit(1707472738.703:290): table=filter:117 family=2 entries=13 op=nft_register_rule pid=3466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:58.710355 kubelet[2631]: E0209 09:58:58.710337 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.710464 kubelet[2631]: W0209 09:58:58.710450 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.710541 kubelet[2631]: E0209 09:58:58.710531 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:58.710626 kubelet[2631]: I0209 09:58:58.710613 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bf23410-5925-4f29-af91-e09a41637de1-tigera-ca-bundle\") pod \"calico-typha-849d9dbb77-xl554\" (UID: \"4bf23410-5925-4f29-af91-e09a41637de1\") " pod="calico-system/calico-typha-849d9dbb77-xl554" Feb 9 09:58:58.703000 audit[3466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffd73409e0 a2=0 a3=ffff9e3d86c0 items=0 ppid=2831 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:58.724678 kubelet[2631]: E0209 09:58:58.724658 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.724805 kubelet[2631]: W0209 09:58:58.724788 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.724911 kubelet[2631]: E0209 09:58:58.724899 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.725103 kubelet[2631]: I0209 09:58:58.725091 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrsvh\" (UniqueName: \"kubernetes.io/projected/4bf23410-5925-4f29-af91-e09a41637de1-kube-api-access-hrsvh\") pod \"calico-typha-849d9dbb77-xl554\" (UID: \"4bf23410-5925-4f29-af91-e09a41637de1\") " pod="calico-system/calico-typha-849d9dbb77-xl554" Feb 9 09:58:58.738531 kubelet[2631]: E0209 09:58:58.738506 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.738698 kubelet[2631]: W0209 09:58:58.738682 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.738775 kubelet[2631]: E0209 09:58:58.738764 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.744394 kubelet[2631]: E0209 09:58:58.744375 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.744533 kubelet[2631]: W0209 09:58:58.744518 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.744607 kubelet[2631]: E0209 09:58:58.744597 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:58.747599 kubelet[2631]: E0209 09:58:58.747577 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.747729 kubelet[2631]: W0209 09:58:58.747716 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.747801 kubelet[2631]: E0209 09:58:58.747791 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.749448 kubelet[2631]: E0209 09:58:58.749432 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.749550 kubelet[2631]: W0209 09:58:58.749537 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.749618 kubelet[2631]: E0209 09:58:58.749608 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.751284 kernel: audit: type=1300 audit(1707472738.703:290): arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffd73409e0 a2=0 a3=ffff9e3d86c0 items=0 ppid=2831 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:58.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:58.751671 kubelet[2631]: E0209 09:58:58.751654 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.751756 kubelet[2631]: W0209 09:58:58.751740 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.751829 kubelet[2631]: E0209 09:58:58.751819 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:58.751919 kubelet[2631]: I0209 09:58:58.751909 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4bf23410-5925-4f29-af91-e09a41637de1-typha-certs\") pod \"calico-typha-849d9dbb77-xl554\" (UID: \"4bf23410-5925-4f29-af91-e09a41637de1\") " pod="calico-system/calico-typha-849d9dbb77-xl554" Feb 9 09:58:58.753454 kubelet[2631]: E0209 09:58:58.753438 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.753555 kubelet[2631]: W0209 09:58:58.753541 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.753626 kubelet[2631]: E0209 09:58:58.753613 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.753899 kubelet[2631]: E0209 09:58:58.753886 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.753990 kubelet[2631]: W0209 09:58:58.753977 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.754058 kubelet[2631]: E0209 09:58:58.754048 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.765458 kernel: audit: type=1327 audit(1707472738.703:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:58.708000 audit[3466]: NETFILTER_CFG table=nat:118 family=2 entries=27 op=nft_unregister_chain pid=3466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:58.780863 kernel: audit: type=1325 audit(1707472738.708:291): table=nat:118 family=2 entries=27 op=nft_unregister_chain pid=3466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:58.708000 audit[3466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5596 a0=3 a1=ffffd73409e0 a2=0 a3=ffff9e3d86c0 items=0 ppid=2831 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:58.809346 kernel: audit: type=1300 audit(1707472738.708:291): arch=c00000b7 syscall=211 success=yes exit=5596 a0=3 a1=ffffd73409e0 a2=0 a3=ffff9e3d86c0 items=0 ppid=2831 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:58.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:58.823592 kernel: audit: type=1327 audit(1707472738.708:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:58.841000 audit[3501]: NETFILTER_CFG table=filter:119 family=2 entries=14 op=nft_register_rule 
pid=3501 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:58.855261 kubelet[2631]: E0209 09:58:58.855238 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.855447 kubelet[2631]: W0209 09:58:58.855431 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.855524 kubelet[2631]: E0209 09:58:58.855513 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.841000 audit[3501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffc6488b10 a2=0 a3=ffffbb8596c0 items=0 ppid=2831 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:58.857902 kubelet[2631]: E0209 09:58:58.857888 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.857986 kubelet[2631]: W0209 09:58:58.857974 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.858058 kubelet[2631]: E0209 09:58:58.858049 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.858407 kubelet[2631]: E0209 09:58:58.858394 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.858500 kubelet[2631]: W0209 09:58:58.858488 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.858567 kubelet[2631]: E0209 09:58:58.858558 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.858834 kubelet[2631]: E0209 09:58:58.858822 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.858918 kubelet[2631]: W0209 09:58:58.858907 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.858988 kubelet[2631]: E0209 09:58:58.858980 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:58.859244 kubelet[2631]: E0209 09:58:58.859233 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.859345 kubelet[2631]: W0209 09:58:58.859332 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.859424 kubelet[2631]: E0209 09:58:58.859415 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.868475 kubelet[2631]: E0209 09:58:58.868456 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.924743 kernel: audit: type=1325 audit(1707472738.841:292): table=filter:119 family=2 entries=14 op=nft_register_rule pid=3501 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:58.924825 kernel: audit: type=1300 audit(1707472738.841:292): arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffc6488b10 a2=0 a3=ffffbb8596c0 items=0 ppid=2831 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:58.924846 kernel: audit: type=1327 audit(1707472738.841:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:58.924873 kernel: audit: type=1325 audit(1707472738.842:293): table=nat:120 family=2 entries=20 op=nft_register_rule pid=3501 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:58.841000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:58.842000 audit[3501]: NETFILTER_CFG table=nat:120 family=2 entries=20 op=nft_register_rule pid=3501 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:58:58.842000 audit[3501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffc6488b10 a2=0 a3=ffffbb8596c0 items=0 ppid=2831 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:58:58.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:58:58.925100 kubelet[2631]: W0209 09:58:58.868541 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.925100 kubelet[2631]: E0209 09:58:58.868567 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:58.925100 kubelet[2631]: E0209 09:58:58.868803 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.925100 kubelet[2631]: W0209 09:58:58.868811 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.925100 kubelet[2631]: E0209 09:58:58.868871 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.925100 kubelet[2631]: E0209 09:58:58.869769 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.925100 kubelet[2631]: W0209 09:58:58.869777 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.925100 kubelet[2631]: E0209 09:58:58.869834 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.925100 kubelet[2631]: E0209 09:58:58.870844 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.925100 kubelet[2631]: W0209 09:58:58.870855 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.925365 kubelet[2631]: E0209 09:58:58.870935 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.925365 kubelet[2631]: E0209 09:58:58.871079 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.925365 kubelet[2631]: W0209 09:58:58.871087 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.925365 kubelet[2631]: E0209 09:58:58.871146 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.925365 kubelet[2631]: E0209 09:58:58.873027 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.925365 kubelet[2631]: W0209 09:58:58.873038 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.925365 kubelet[2631]: E0209 09:58:58.873116 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:58.925365 kubelet[2631]: E0209 09:58:58.873417 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.925365 kubelet[2631]: W0209 09:58:58.873426 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.925365 kubelet[2631]: E0209 09:58:58.873440 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.925586 kubelet[2631]: E0209 09:58:58.878424 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.925586 kubelet[2631]: W0209 09:58:58.878435 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.925586 kubelet[2631]: E0209 09:58:58.878451 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.925586 kubelet[2631]: E0209 09:58:58.883504 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.925586 kubelet[2631]: W0209 09:58:58.883527 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.925586 kubelet[2631]: E0209 09:58:58.883568 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.925586 kubelet[2631]: E0209 09:58:58.884623 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.925586 kubelet[2631]: W0209 09:58:58.884635 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.925586 kubelet[2631]: E0209 09:58:58.884652 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.927347 kubelet[2631]: E0209 09:58:58.927328 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.929147 kubelet[2631]: W0209 09:58:58.927423 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.929147 kubelet[2631]: E0209 09:58:58.927449 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:58:58.941483 kubelet[2631]: E0209 09:58:58.941462 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.941629 kubelet[2631]: W0209 09:58:58.941615 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.941703 kubelet[2631]: E0209 09:58:58.941693 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.941917 kubelet[2631]: E0209 09:58:58.941898 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:58:58.941917 kubelet[2631]: W0209 09:58:58.941914 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:58:58.942035 kubelet[2631]: E0209 09:58:58.941928 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:58:58.964772 env[1448]: time="2024-02-09T09:58:58.964724834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-849d9dbb77-xl554,Uid:4bf23410-5925-4f29-af91-e09a41637de1,Namespace:calico-system,Attempt:0,}" Feb 9 09:58:59.021533 env[1448]: time="2024-02-09T09:58:59.021484756Z" level=info msg="RemoveContainer for \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\" returns successfully" Feb 9 09:58:59.022021 kubelet[2631]: I0209 09:58:59.021928 2631 scope.go:115] "RemoveContainer" containerID="aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b" Feb 9 09:58:59.022217 env[1448]: time="2024-02-09T09:58:59.022129802Z" level=error msg="ContainerStatus for \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\": not found" Feb 9 09:58:59.022428 kubelet[2631]: E0209 09:58:59.022374 2631 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\": not found" containerID="aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b" Feb 9 09:58:59.022428 kubelet[2631]: I0209 09:58:59.022410 2631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b} err="failed to get container status \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\": not found" Feb 9 09:58:59.456252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628445874.mount: Deactivated successfully. 
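The block of kubelet messages above repeats a single failure: during plugin probing, driver-call.go executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary is not present ("executable file not found in $PATH"), and the resulting empty stdout then fails JSON unmarshalling ("unexpected end of JSON input"), so plugins.go skips the nodeagent~uds FlexVolume directory and retries on the next probe. A FlexVolume driver is expected to answer init with a small JSON status object on stdout. The sketch below shows that reply shape under the assumption of the conventional FlexVolume call protocol; it is a hypothetical stand-in, not taken from this log.

# Hypothetical stand-in for the missing "uds" driver, showing only the "init" reply
# the kubelet tries to unmarshal above (assumed standard FlexVolume response format).
import json
import sys

def handle_init() -> None:
    # "Success" tells the kubelet the driver initialised; "attach": False means the
    # driver does not implement the attach/detach call family.
    json.dump({"status": "Success", "capabilities": {"attach": False}}, sys.stdout)

if __name__ == "__main__" and len(sys.argv) > 1 and sys.argv[1] == "init":
    handle_init()

Because the real executable is absent, stdout stays empty and unmarshalling fails exactly as logged, so the warning/error triplet recurs on every probe; the messages are noisy but do not block the Calico rollout recorded below.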
Feb 9 09:58:59.540580 kubelet[2631]: E0209 09:58:59.539648 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:58:59.541442 env[1448]: time="2024-02-09T09:58:59.541164623Z" level=info msg="StopContainer for \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\" with timeout 1 (s)" Feb 9 09:58:59.541442 env[1448]: time="2024-02-09T09:58:59.541207743Z" level=error msg="StopContainer for \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\": not found" Feb 9 09:58:59.543075 kubelet[2631]: E0209 09:58:59.541895 2631 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b\": not found" containerID="aa7230d955fbb2d6988852320569731ff8a0cd5ed036776bee4a7ed4e658fc7b" Feb 9 09:58:59.543689 env[1448]: time="2024-02-09T09:58:59.543451604Z" level=info msg="StopPodSandbox for \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\"" Feb 9 09:58:59.543689 env[1448]: time="2024-02-09T09:58:59.543537925Z" level=info msg="TearDown network for sandbox \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\" successfully" Feb 9 09:58:59.543689 env[1448]: time="2024-02-09T09:58:59.543569205Z" level=info msg="StopPodSandbox for \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\" returns successfully" Feb 9 09:58:59.546497 kubelet[2631]: I0209 09:58:59.544454 2631 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c path="/var/lib/kubelet/pods/c74a0ab7-73f9-4f41-bc7c-7f16d8a4958c/volumes" Feb 9 09:58:59.778936 env[1448]: time="2024-02-09T09:58:59.778866274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:58:59.778936 env[1448]: time="2024-02-09T09:58:59.778908554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:58:59.779167 env[1448]: time="2024-02-09T09:58:59.778918754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:58:59.779452 env[1448]: time="2024-02-09T09:58:59.779296918Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d919d087db89bc99e3f90b7a4630243b8796b14f7a36a0f61c44cdc5cbf2470 pid=3531 runtime=io.containerd.runc.v2 Feb 9 09:58:59.821207 env[1448]: time="2024-02-09T09:58:59.821158220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-849d9dbb77-xl554,Uid:4bf23410-5925-4f29-af91-e09a41637de1,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d919d087db89bc99e3f90b7a4630243b8796b14f7a36a0f61c44cdc5cbf2470\"" Feb 9 09:58:59.829494 env[1448]: time="2024-02-09T09:58:59.829449456Z" level=info msg="CreateContainer within sandbox \"3d919d087db89bc99e3f90b7a4630243b8796b14f7a36a0f61c44cdc5cbf2470\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 9 09:59:00.281397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3635519700.mount: Deactivated successfully. Feb 9 09:59:00.612547 env[1448]: time="2024-02-09T09:59:00.612418785Z" level=info msg="CreateContainer within sandbox \"3d919d087db89bc99e3f90b7a4630243b8796b14f7a36a0f61c44cdc5cbf2470\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a3b544bf04636b16a4283eb5ca02b54b2c87f168ed856c2d6111cb0468a8a61c\"" Feb 9 09:59:00.614871 env[1448]: time="2024-02-09T09:59:00.613109671Z" level=info msg="StartContainer for \"a3b544bf04636b16a4283eb5ca02b54b2c87f168ed856c2d6111cb0468a8a61c\"" Feb 9 09:59:00.678723 env[1448]: time="2024-02-09T09:59:00.678664663Z" level=info msg="StartContainer for \"a3b544bf04636b16a4283eb5ca02b54b2c87f168ed856c2d6111cb0468a8a61c\" returns successfully" Feb 9 09:59:00.763280 update_engine[1431]: I0209 09:59:00.762596 1431 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 09:59:00.763280 update_engine[1431]: I0209 09:59:00.762627 1431 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 09:59:00.763280 update_engine[1431]: I0209 09:59:00.762741 1431 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 09:59:00.763280 update_engine[1431]: I0209 09:59:00.763062 1431 omaha_request_params.cc:62] Current group set to lts Feb 9 09:59:00.763280 update_engine[1431]: I0209 09:59:00.763147 1431 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 9 09:59:00.763280 update_engine[1431]: I0209 09:59:00.763151 1431 update_attempter.cc:643] Scheduling an action processor start. 
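The kernel audit records interleaved above (type=1325/1300/1327, i.e. NETFILTER_CFG, SYSCALL and PROCTITLE) describe iptables-restore invocations; the PROCTITLE field carries the full command line as hex-encoded bytes with NUL separators between arguments. A short decoding sketch, using the hex string copied verbatim from the records above:

# Decode an audit PROCTITLE value: hex-encoded argv with NUL bytes between arguments.
proctitle = ("69707461626C65732D726573746F7265002D770035002D5700"
             "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
argv = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
print(argv)
# ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

So the audited process (comm="iptables-restor", exe=/usr/sbin/xtables-nft-multi) was invoked as iptables-restore -w 5 -W 100000 --noflush --counters, which is consistent with the nft_register_rule and nft_register_chain operations reported in the NETFILTER_CFG lines.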
Feb 9 09:59:00.763280 update_engine[1431]: I0209 09:59:00.763166 1431 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 09:59:00.763280 update_engine[1431]: I0209 09:59:00.763186 1431 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 09:59:00.763808 locksmithd[1502]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 09:59:00.804332 update_engine[1431]: I0209 09:59:00.804275 1431 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 09:59:00.804332 update_engine[1431]: I0209 09:59:00.804316 1431 omaha_request_action.cc:271] Request: Feb 9 09:59:00.804332 update_engine[1431]: Feb 9 09:59:00.804332 update_engine[1431]: Feb 9 09:59:00.804332 update_engine[1431]: Feb 9 09:59:00.804332 update_engine[1431]: Feb 9 09:59:00.804332 update_engine[1431]: Feb 9 09:59:00.804332 update_engine[1431]: Feb 9 09:59:00.804332 update_engine[1431]: Feb 9 09:59:00.804332 update_engine[1431]: Feb 9 09:59:00.804332 update_engine[1431]: I0209 09:59:00.804322 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:59:00.805157 update_engine[1431]: I0209 09:59:00.805130 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:59:00.805365 update_engine[1431]: I0209 09:59:00.805348 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 9 09:59:00.874881 update_engine[1431]: E0209 09:59:00.874197 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:59:00.874881 update_engine[1431]: I0209 09:59:00.874321 1431 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 09:59:01.076424 env[1448]: time="2024-02-09T09:59:01.076377928Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:01.168200 env[1448]: time="2024-02-09T09:59:01.167864105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:01.215193 env[1448]: time="2024-02-09T09:59:01.215137408Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:01.263445 env[1448]: time="2024-02-09T09:59:01.263400359Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:01.263756 env[1448]: time="2024-02-09T09:59:01.263726002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57\"" Feb 9 09:59:01.267571 env[1448]: time="2024-02-09T09:59:01.267528316Z" level=info msg="CreateContainer within sandbox \"9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 09:59:01.480217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount707522100.mount: Deactivated successfully. 
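The systemd lines here ("var-lib-containerd-tmpmounts-containerd\x2dmount707522100.mount: Deactivated successfully") use systemd's unit-name escaping for mount units: the leading "/" of the mount point is dropped, each remaining "/" becomes "-", and characters such as "-" or "~" are written as \xNN escapes. On a live host, systemd-escape --path --unescape reverses this; the sketch below reimplements just those two rules to recover the mount point, and also covers the kubelet pod-volume units further down (where \x7e decodes back to "~").

import re

def unescape_mount_unit(unit: str) -> str:
    # Reverse systemd mount-unit escaping: strip ".mount", turn "-" back into "/",
    # then expand \xNN escapes (done last so "\x2d" is not confused with a separator).
    name = unit.removesuffix(".mount")
    path = name.replace("-", "/")
    path = re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)
    return "/" + path

print(unescape_mount_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount707522100.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount707522100

Applying the same decoding to the var-lib-kubelet-pods-… units near the end of this section yields paths under /var/lib/kubelet/pods/<pod-UID>/volumes/, matching the UnmountVolume.TearDown messages that precede them.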
Feb 9 09:59:01.539265 kubelet[2631]: E0209 09:59:01.539228 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:59:01.629392 env[1448]: time="2024-02-09T09:59:01.629339668Z" level=info msg="CreateContainer within sandbox \"9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802\"" Feb 9 09:59:01.631627 env[1448]: time="2024-02-09T09:59:01.631591808Z" level=info msg="StartContainer for \"89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802\"" Feb 9 09:59:01.655353 kubelet[2631]: I0209 09:59:01.655324 2631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-849d9dbb77-xl554" podStartSLOduration=13.655270819 pod.CreationTimestamp="2024-02-09 09:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:59:01.654788375 +0000 UTC m=+46.518972701" watchObservedRunningTime="2024-02-09 09:59:01.655270819 +0000 UTC m=+46.519455145" Feb 9 09:59:01.725960 env[1448]: time="2024-02-09T09:59:01.725745049Z" level=info msg="StartContainer for \"89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802\" returns successfully" Feb 9 09:59:01.729487 kubelet[2631]: E0209 09:59:01.729459 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:59:01.729487 kubelet[2631]: W0209 09:59:01.729487 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:59:01.729671 kubelet[2631]: E0209 09:59:01.729508 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:59:01.729671 kubelet[2631]: E0209 09:59:01.729669 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:59:01.729750 kubelet[2631]: W0209 09:59:01.729678 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:59:01.729750 kubelet[2631]: E0209 09:59:01.729689 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:59:01.729817 kubelet[2631]: E0209 09:59:01.729802 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:59:01.729849 kubelet[2631]: W0209 09:59:01.729817 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:59:01.729849 kubelet[2631]: E0209 09:59:01.729827 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:59:01.730002 kubelet[2631]: E0209 09:59:01.729988 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:59:01.730002 kubelet[2631]: W0209 09:59:01.730001 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:59:01.730082 kubelet[2631]: E0209 09:59:01.730011 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:59:01.730150 kubelet[2631]: E0209 09:59:01.730138 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:59:01.730150 kubelet[2631]: W0209 09:59:01.730149 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:59:01.730219 kubelet[2631]: E0209 09:59:01.730158 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:59:01.730298 kubelet[2631]: E0209 09:59:01.730284 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:59:01.730298 kubelet[2631]: W0209 09:59:01.730295 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:59:01.730397 kubelet[2631]: E0209 09:59:01.730332 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:59:01.730529 kubelet[2631]: E0209 09:59:01.730516 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:59:01.730570 kubelet[2631]: W0209 09:59:01.730529 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:59:01.730570 kubelet[2631]: E0209 09:59:01.730541 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:59:01.730678 kubelet[2631]: E0209 09:59:01.730664 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:59:01.730678 kubelet[2631]: W0209 09:59:01.730675 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:59:01.730747 kubelet[2631]: E0209 09:59:01.730684 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:59:01.730820 kubelet[2631]: E0209 09:59:01.730800 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:59:01.730820 kubelet[2631]: W0209 09:59:01.730819 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:59:01.730881 kubelet[2631]: E0209 09:59:01.730828 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:59:01.730979 kubelet[2631]: E0209 09:59:01.730966 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:59:01.730979 kubelet[2631]: W0209 09:59:01.730978 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:59:01.731043 kubelet[2631]: E0209 09:59:01.730988 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:59:01.731128 kubelet[2631]: E0209 09:59:01.731115 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:59:01.731128 kubelet[2631]: W0209 09:59:01.731127 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:59:01.731193 kubelet[2631]: E0209 09:59:01.731137 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 09:59:01.731272 kubelet[2631]: E0209 09:59:01.731253 2631 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 09:59:01.731272 kubelet[2631]: W0209 09:59:01.731272 2631 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 09:59:01.731368 kubelet[2631]: E0209 09:59:01.731281 2631 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 09:59:01.748000 audit[3679]: NETFILTER_CFG table=filter:121 family=2 entries=13 op=nft_register_rule pid=3679 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:59:01.748000 audit[3679]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=fffff179c5a0 a2=0 a3=ffff887c56c0 items=0 ppid=2831 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:01.748000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:59:01.749000 audit[3679]: NETFILTER_CFG table=nat:122 family=2 entries=27 op=nft_register_chain pid=3679 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:59:01.749000 audit[3679]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=fffff179c5a0 a2=0 a3=ffff887c56c0 items=0 ppid=2831 pid=3679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:01.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:59:02.189454 systemd[1]: run-containerd-runc-k8s.io-89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802-runc.UIPcgM.mount: Deactivated successfully. Feb 9 09:59:02.189590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802-rootfs.mount: Deactivated successfully. Feb 9 09:59:02.646363 env[1448]: time="2024-02-09T09:59:02.646218690Z" level=info msg="StopContainer for \"89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802\" with timeout 5 (s)" Feb 9 09:59:03.539493 kubelet[2631]: E0209 09:59:03.539463 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:59:04.646920 env[1448]: time="2024-02-09T09:59:04.646841000Z" level=error msg="get state for 89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802" error="context deadline exceeded: unknown" Feb 9 09:59:04.646920 env[1448]: time="2024-02-09T09:59:04.646907881Z" level=warning msg="unknown status" status=0 Feb 9 09:59:04.647518 env[1448]: time="2024-02-09T09:59:04.646942521Z" level=info msg="Stop container \"89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802\" with signal terminated" Feb 9 09:59:04.771896 env[1448]: time="2024-02-09T09:59:04.771845722Z" level=info msg="shim disconnected" id=89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802 Feb 9 09:59:04.772031 env[1448]: time="2024-02-09T09:59:04.771912962Z" level=warning msg="cleaning up after shim disconnected" id=89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802 namespace=k8s.io Feb 9 09:59:04.772031 env[1448]: time="2024-02-09T09:59:04.771925563Z" level=info msg="cleaning up dead shim" Feb 9 09:59:04.779424 env[1448]: time="2024-02-09T09:59:04.779375147Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:04Z\" level=info msg=\"starting signal loop\" 
namespace=k8s.io pid=3699 runtime=io.containerd.runc.v2\n" Feb 9 09:59:04.813152 env[1448]: time="2024-02-09T09:59:04.813104359Z" level=info msg="StopContainer for \"89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802\" returns successfully" Feb 9 09:59:04.813642 env[1448]: time="2024-02-09T09:59:04.813615723Z" level=info msg="StopPodSandbox for \"9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924\"" Feb 9 09:59:04.813732 env[1448]: time="2024-02-09T09:59:04.813668404Z" level=info msg="Container to stop \"89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:04.815990 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924-shm.mount: Deactivated successfully. Feb 9 09:59:04.836385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924-rootfs.mount: Deactivated successfully. Feb 9 09:59:05.364672 env[1448]: time="2024-02-09T09:59:05.364597940Z" level=info msg="shim disconnected" id=9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924 Feb 9 09:59:05.364672 env[1448]: time="2024-02-09T09:59:05.364663381Z" level=warning msg="cleaning up after shim disconnected" id=9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924 namespace=k8s.io Feb 9 09:59:05.364672 env[1448]: time="2024-02-09T09:59:05.364673181Z" level=info msg="cleaning up dead shim" Feb 9 09:59:05.371938 env[1448]: time="2024-02-09T09:59:05.371890043Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3731 runtime=io.containerd.runc.v2\n" Feb 9 09:59:05.372203 env[1448]: time="2024-02-09T09:59:05.372171125Z" level=info msg="TearDown network for sandbox \"9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924\" successfully" Feb 9 09:59:05.372203 env[1448]: time="2024-02-09T09:59:05.372198766Z" level=info msg="StopPodSandbox for \"9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924\" returns successfully" Feb 9 09:59:05.501508 kubelet[2631]: I0209 09:59:05.501461 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-policysync" (OuterVolumeSpecName: "policysync") pod "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da" (UID: "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:05.501508 kubelet[2631]: I0209 09:59:05.501480 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-policysync\") pod \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " Feb 9 09:59:05.501894 kubelet[2631]: I0209 09:59:05.501541 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-var-run-calico\") pod \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " Feb 9 09:59:05.501894 kubelet[2631]: I0209 09:59:05.501584 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da" (UID: "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:05.501894 kubelet[2631]: I0209 09:59:05.501601 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da" (UID: "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:05.501894 kubelet[2631]: I0209 09:59:05.501616 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-var-lib-calico\") pod \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " Feb 9 09:59:05.501894 kubelet[2631]: I0209 09:59:05.501634 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-cni-log-dir\") pod \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " Feb 9 09:59:05.502015 kubelet[2631]: I0209 09:59:05.501661 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-cni-net-dir\") pod \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " Feb 9 09:59:05.502015 kubelet[2631]: I0209 09:59:05.501679 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-xtables-lock\") pod \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " Feb 9 09:59:05.502015 kubelet[2631]: I0209 09:59:05.501701 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-node-certs\") pod \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " Feb 9 09:59:05.502015 kubelet[2631]: I0209 09:59:05.501717 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-cni-bin-dir\") pod \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " Feb 9 09:59:05.502015 kubelet[2631]: I0209 09:59:05.501749 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2fbq\" (UniqueName: \"kubernetes.io/projected/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-kube-api-access-c2fbq\") pod \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " Feb 9 09:59:05.502015 kubelet[2631]: I0209 09:59:05.501767 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-lib-modules\") pod \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " Feb 9 09:59:05.502149 kubelet[2631]: I0209 09:59:05.501786 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-flexvol-driver-host\") pod \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " Feb 9 09:59:05.502149 kubelet[2631]: I0209 09:59:05.501816 2631 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-tigera-ca-bundle\") pod \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\" (UID: \"d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da\") " Feb 9 09:59:05.502149 kubelet[2631]: I0209 09:59:05.501851 2631 reconciler_common.go:295] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-policysync\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:59:05.502149 kubelet[2631]: I0209 09:59:05.501863 2631 reconciler_common.go:295] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-var-run-calico\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:59:05.502149 kubelet[2631]: I0209 09:59:05.501883 2631 reconciler_common.go:295] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-var-lib-calico\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:59:05.502149 kubelet[2631]: W0209 09:59:05.502056 2631 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da/volumes/kubernetes.io~configmap/tigera-ca-bundle: clearQuota called, but quotas disabled Feb 9 09:59:05.502355 kubelet[2631]: I0209 09:59:05.502290 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da" (UID: "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:59:05.502389 kubelet[2631]: I0209 09:59:05.502354 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da" (UID: "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:05.502389 kubelet[2631]: I0209 09:59:05.502371 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da" (UID: "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:05.502436 kubelet[2631]: I0209 09:59:05.502386 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da" (UID: "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:05.503011 kubelet[2631]: I0209 09:59:05.502649 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da" (UID: "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:05.503011 kubelet[2631]: I0209 09:59:05.502685 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da" (UID: "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:05.503011 kubelet[2631]: I0209 09:59:05.502703 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da" (UID: "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:05.507874 systemd[1]: var-lib-kubelet-pods-d1f0fc46\x2d7e2b\x2d4488\x2da6cc\x2dbc3b70c5e3da-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc2fbq.mount: Deactivated successfully. Feb 9 09:59:05.508032 systemd[1]: var-lib-kubelet-pods-d1f0fc46\x2d7e2b\x2d4488\x2da6cc\x2dbc3b70c5e3da-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Feb 9 09:59:05.511997 kubelet[2631]: I0209 09:59:05.511969 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-node-certs" (OuterVolumeSpecName: "node-certs") pod "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da" (UID: "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:59:05.512963 kubelet[2631]: I0209 09:59:05.512933 2631 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-kube-api-access-c2fbq" (OuterVolumeSpecName: "kube-api-access-c2fbq") pod "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da" (UID: "d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da"). InnerVolumeSpecName "kube-api-access-c2fbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:05.539675 kubelet[2631]: E0209 09:59:05.539642 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:59:05.602787 kubelet[2631]: I0209 09:59:05.602759 2631 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-xtables-lock\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:59:05.602787 kubelet[2631]: I0209 09:59:05.602785 2631 reconciler_common.go:295] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-cni-log-dir\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:59:05.602787 kubelet[2631]: I0209 09:59:05.602798 2631 reconciler_common.go:295] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-cni-net-dir\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:59:05.602992 kubelet[2631]: I0209 09:59:05.602808 2631 reconciler_common.go:295] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-cni-bin-dir\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:59:05.602992 kubelet[2631]: I0209 09:59:05.602820 2631 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-c2fbq\" (UniqueName: \"kubernetes.io/projected/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-kube-api-access-c2fbq\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:59:05.602992 kubelet[2631]: I0209 09:59:05.602830 2631 reconciler_common.go:295] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-node-certs\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:59:05.602992 kubelet[2631]: I0209 09:59:05.602839 2631 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-lib-modules\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:59:05.602992 kubelet[2631]: I0209 09:59:05.602850 2631 reconciler_common.go:295] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-flexvol-driver-host\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:59:05.602992 kubelet[2631]: I0209 09:59:05.602860 2631 reconciler_common.go:295] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da-tigera-ca-bundle\") on node \"ci-3510.3.2-a-d10cdd880c\" DevicePath \"\"" Feb 9 09:59:05.652503 kubelet[2631]: I0209 09:59:05.651082 2631 scope.go:115] "RemoveContainer" containerID="89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802" Feb 9 09:59:05.654863 env[1448]: time="2024-02-09T09:59:05.654821827Z" level=info msg="RemoveContainer for \"89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802\"" Feb 9 09:59:05.679041 env[1448]: time="2024-02-09T09:59:05.679002114Z" level=info msg="RemoveContainer for \"89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802\" returns successfully" Feb 9 
09:59:05.679426 kubelet[2631]: I0209 09:59:05.679400 2631 scope.go:115] "RemoveContainer" containerID="89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802" Feb 9 09:59:05.679812 env[1448]: time="2024-02-09T09:59:05.679738360Z" level=error msg="ContainerStatus for \"89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802\": not found" Feb 9 09:59:05.680036 kubelet[2631]: E0209 09:59:05.679999 2631 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802\": not found" containerID="89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802" Feb 9 09:59:05.680036 kubelet[2631]: I0209 09:59:05.680035 2631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802} err="failed to get container status \"89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802\": rpc error: code = NotFound desc = an error occurred when try to find container \"89149ba37d5ba1b7c013e9784cd8ff5c19e7705d4db011705c3c3b98a640d802\": not found" Feb 9 09:59:05.704471 kubelet[2631]: I0209 09:59:05.704434 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:59:05.704678 kubelet[2631]: E0209 09:59:05.704665 2631 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da" containerName="flexvol-driver" Feb 9 09:59:05.704781 kubelet[2631]: I0209 09:59:05.704769 2631 memory_manager.go:346] "RemoveStaleState removing state" podUID="d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da" containerName="flexvol-driver" Feb 9 09:59:05.804119 kubelet[2631]: I0209 09:59:05.804088 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7be306fa-32e0-450f-8e15-d813f38b592f-policysync\") pod \"calico-node-hxw57\" (UID: \"7be306fa-32e0-450f-8e15-d813f38b592f\") " pod="calico-system/calico-node-hxw57" Feb 9 09:59:05.804340 kubelet[2631]: I0209 09:59:05.804298 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7be306fa-32e0-450f-8e15-d813f38b592f-node-certs\") pod \"calico-node-hxw57\" (UID: \"7be306fa-32e0-450f-8e15-d813f38b592f\") " pod="calico-system/calico-node-hxw57" Feb 9 09:59:05.804453 kubelet[2631]: I0209 09:59:05.804442 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7be306fa-32e0-450f-8e15-d813f38b592f-lib-modules\") pod \"calico-node-hxw57\" (UID: \"7be306fa-32e0-450f-8e15-d813f38b592f\") " pod="calico-system/calico-node-hxw57" Feb 9 09:59:05.804551 kubelet[2631]: I0209 09:59:05.804541 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7be306fa-32e0-450f-8e15-d813f38b592f-xtables-lock\") pod \"calico-node-hxw57\" (UID: \"7be306fa-32e0-450f-8e15-d813f38b592f\") " pod="calico-system/calico-node-hxw57" Feb 9 09:59:05.804650 kubelet[2631]: I0209 09:59:05.804640 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7be306fa-32e0-450f-8e15-d813f38b592f-var-lib-calico\") pod \"calico-node-hxw57\" (UID: \"7be306fa-32e0-450f-8e15-d813f38b592f\") " pod="calico-system/calico-node-hxw57" Feb 9 09:59:05.804755 kubelet[2631]: I0209 09:59:05.804746 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7be306fa-32e0-450f-8e15-d813f38b592f-cni-net-dir\") pod \"calico-node-hxw57\" (UID: \"7be306fa-32e0-450f-8e15-d813f38b592f\") " pod="calico-system/calico-node-hxw57" Feb 9 09:59:05.804872 kubelet[2631]: I0209 09:59:05.804861 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7be306fa-32e0-450f-8e15-d813f38b592f-cni-log-dir\") pod \"calico-node-hxw57\" (UID: \"7be306fa-32e0-450f-8e15-d813f38b592f\") " pod="calico-system/calico-node-hxw57" Feb 9 09:59:05.804975 kubelet[2631]: I0209 09:59:05.804964 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7be306fa-32e0-450f-8e15-d813f38b592f-var-run-calico\") pod \"calico-node-hxw57\" (UID: \"7be306fa-32e0-450f-8e15-d813f38b592f\") " pod="calico-system/calico-node-hxw57" Feb 9 09:59:05.805064 kubelet[2631]: I0209 09:59:05.805054 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9mgw\" (UniqueName: \"kubernetes.io/projected/7be306fa-32e0-450f-8e15-d813f38b592f-kube-api-access-d9mgw\") pod \"calico-node-hxw57\" (UID: \"7be306fa-32e0-450f-8e15-d813f38b592f\") " pod="calico-system/calico-node-hxw57" Feb 9 09:59:05.805179 kubelet[2631]: I0209 09:59:05.805161 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7be306fa-32e0-450f-8e15-d813f38b592f-flexvol-driver-host\") pod \"calico-node-hxw57\" (UID: \"7be306fa-32e0-450f-8e15-d813f38b592f\") " pod="calico-system/calico-node-hxw57" Feb 9 09:59:05.805226 kubelet[2631]: I0209 09:59:05.805197 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7be306fa-32e0-450f-8e15-d813f38b592f-tigera-ca-bundle\") pod \"calico-node-hxw57\" (UID: \"7be306fa-32e0-450f-8e15-d813f38b592f\") " pod="calico-system/calico-node-hxw57" Feb 9 09:59:05.805226 kubelet[2631]: I0209 09:59:05.805220 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7be306fa-32e0-450f-8e15-d813f38b592f-cni-bin-dir\") pod \"calico-node-hxw57\" (UID: \"7be306fa-32e0-450f-8e15-d813f38b592f\") " pod="calico-system/calico-node-hxw57" Feb 9 09:59:06.009842 env[1448]: time="2024-02-09T09:59:06.009145502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hxw57,Uid:7be306fa-32e0-450f-8e15-d813f38b592f,Namespace:calico-system,Attempt:0,}" Feb 9 09:59:06.050466 env[1448]: time="2024-02-09T09:59:06.050365692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:59:06.050466 env[1448]: time="2024-02-09T09:59:06.050410412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:59:06.050466 env[1448]: time="2024-02-09T09:59:06.050420812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:59:06.050690 env[1448]: time="2024-02-09T09:59:06.050558853Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/316e47a97cc9b183f4e67e6e6a9a2103a59e72221e8e37f5be1c7a3a01e659b0 pid=3755 runtime=io.containerd.runc.v2 Feb 9 09:59:06.091573 env[1448]: time="2024-02-09T09:59:06.091467081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hxw57,Uid:7be306fa-32e0-450f-8e15-d813f38b592f,Namespace:calico-system,Attempt:0,} returns sandbox id \"316e47a97cc9b183f4e67e6e6a9a2103a59e72221e8e37f5be1c7a3a01e659b0\"" Feb 9 09:59:06.094538 env[1448]: time="2024-02-09T09:59:06.093903781Z" level=info msg="CreateContainer within sandbox \"316e47a97cc9b183f4e67e6e6a9a2103a59e72221e8e37f5be1c7a3a01e659b0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 09:59:06.136367 env[1448]: time="2024-02-09T09:59:06.136278781Z" level=info msg="CreateContainer within sandbox \"316e47a97cc9b183f4e67e6e6a9a2103a59e72221e8e37f5be1c7a3a01e659b0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2f9d1465f78d9f5745b8f63000af7ae234b0c0c0d166b04731e37bc9d336f5e0\"" Feb 9 09:59:06.138616 env[1448]: time="2024-02-09T09:59:06.138566200Z" level=info msg="StartContainer for \"2f9d1465f78d9f5745b8f63000af7ae234b0c0c0d166b04731e37bc9d336f5e0\"" Feb 9 09:59:06.190099 env[1448]: time="2024-02-09T09:59:06.189997436Z" level=info msg="StartContainer for \"2f9d1465f78d9f5745b8f63000af7ae234b0c0c0d166b04731e37bc9d336f5e0\" returns successfully" Feb 9 09:59:06.911429 systemd[1]: run-containerd-runc-k8s.io-316e47a97cc9b183f4e67e6e6a9a2103a59e72221e8e37f5be1c7a3a01e659b0-runc.tLdz7D.mount: Deactivated successfully. 
Feb 9 09:59:07.540518 kubelet[2631]: E0209 09:59:07.540478 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:59:07.544252 kubelet[2631]: I0209 09:59:07.544219 2631 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da path="/var/lib/kubelet/pods/d1f0fc46-7e2b-4488-a6cc-bc3b70c5e3da/volumes" Feb 9 09:59:07.574512 env[1448]: time="2024-02-09T09:59:07.574279533Z" level=info msg="shim disconnected" id=2f9d1465f78d9f5745b8f63000af7ae234b0c0c0d166b04731e37bc9d336f5e0 Feb 9 09:59:07.574512 env[1448]: time="2024-02-09T09:59:07.574346734Z" level=warning msg="cleaning up after shim disconnected" id=2f9d1465f78d9f5745b8f63000af7ae234b0c0c0d166b04731e37bc9d336f5e0 namespace=k8s.io Feb 9 09:59:07.574512 env[1448]: time="2024-02-09T09:59:07.574357134Z" level=info msg="cleaning up dead shim" Feb 9 09:59:07.581761 env[1448]: time="2024-02-09T09:59:07.581724556Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3841 runtime=io.containerd.runc.v2\n" Feb 9 09:59:07.666073 env[1448]: time="2024-02-09T09:59:07.666031104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 09:59:09.539691 kubelet[2631]: E0209 09:59:09.539648 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:59:10.144125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700287790.mount: Deactivated successfully. Feb 9 09:59:10.748356 update_engine[1431]: I0209 09:59:10.748246 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:59:10.748709 update_engine[1431]: I0209 09:59:10.748461 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:59:10.748709 update_engine[1431]: I0209 09:59:10.748638 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 9 09:59:10.791437 update_engine[1431]: E0209 09:59:10.791391 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:59:10.791579 update_engine[1431]: I0209 09:59:10.791510 1431 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 09:59:11.540803 kubelet[2631]: E0209 09:59:11.540776 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:59:12.414412 env[1448]: time="2024-02-09T09:59:12.414363119Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:12.420859 env[1448]: time="2024-02-09T09:59:12.420811171Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:12.424992 env[1448]: time="2024-02-09T09:59:12.424961645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:12.429382 env[1448]: time="2024-02-09T09:59:12.429349920Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:12.430079 env[1448]: time="2024-02-09T09:59:12.430012325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b\"" Feb 9 09:59:12.433692 env[1448]: time="2024-02-09T09:59:12.433656154Z" level=info msg="CreateContainer within sandbox \"316e47a97cc9b183f4e67e6e6a9a2103a59e72221e8e37f5be1c7a3a01e659b0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 09:59:12.464645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1544407358.mount: Deactivated successfully. Feb 9 09:59:12.482376 env[1448]: time="2024-02-09T09:59:12.482283945Z" level=info msg="CreateContainer within sandbox \"316e47a97cc9b183f4e67e6e6a9a2103a59e72221e8e37f5be1c7a3a01e659b0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"75be706efc36439f2bdb9587171787d9adc2ca8c373ad417c1d41e1a061216f6\"" Feb 9 09:59:12.483858 env[1448]: time="2024-02-09T09:59:12.483153272Z" level=info msg="StartContainer for \"75be706efc36439f2bdb9587171787d9adc2ca8c373ad417c1d41e1a061216f6\"" Feb 9 09:59:12.506340 systemd[1]: run-containerd-runc-k8s.io-75be706efc36439f2bdb9587171787d9adc2ca8c373ad417c1d41e1a061216f6-runc.KNHjoF.mount: Deactivated successfully. 
Feb 9 09:59:12.547163 env[1448]: time="2024-02-09T09:59:12.544542685Z" level=info msg="StartContainer for \"75be706efc36439f2bdb9587171787d9adc2ca8c373ad417c1d41e1a061216f6\" returns successfully" Feb 9 09:59:13.539647 kubelet[2631]: E0209 09:59:13.539608 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:59:15.248484 env[1448]: time="2024-02-09T09:59:15.248397429Z" level=info msg="StopPodSandbox for \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\"" Feb 9 09:59:15.248958 env[1448]: time="2024-02-09T09:59:15.248488150Z" level=info msg="TearDown network for sandbox \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\" successfully" Feb 9 09:59:15.248958 env[1448]: time="2024-02-09T09:59:15.248520390Z" level=info msg="StopPodSandbox for \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\" returns successfully" Feb 9 09:59:15.250805 env[1448]: time="2024-02-09T09:59:15.249511518Z" level=info msg="RemovePodSandbox for \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\"" Feb 9 09:59:15.250805 env[1448]: time="2024-02-09T09:59:15.249549798Z" level=info msg="Forcibly stopping sandbox \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\"" Feb 9 09:59:15.250805 env[1448]: time="2024-02-09T09:59:15.249623399Z" level=info msg="TearDown network for sandbox \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\" successfully" Feb 9 09:59:15.540000 kubelet[2631]: E0209 09:59:15.539962 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:59:17.174847 env[1448]: time="2024-02-09T09:59:17.174783639Z" level=info msg="RemovePodSandbox \"4716c7d9f1325f273cf3f9f437a405894f323375ca836f3ac49de65a7f439299\" returns successfully" Feb 9 09:59:17.176739 env[1448]: time="2024-02-09T09:59:17.176502732Z" level=info msg="StopPodSandbox for \"9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924\"" Feb 9 09:59:17.176739 env[1448]: time="2024-02-09T09:59:17.176607133Z" level=info msg="TearDown network for sandbox \"9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924\" successfully" Feb 9 09:59:17.176739 env[1448]: time="2024-02-09T09:59:17.176657373Z" level=info msg="StopPodSandbox for \"9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924\" returns successfully" Feb 9 09:59:17.177760 env[1448]: time="2024-02-09T09:59:17.177700941Z" level=info msg="RemovePodSandbox for \"9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924\"" Feb 9 09:59:17.177840 env[1448]: time="2024-02-09T09:59:17.177754542Z" level=info msg="Forcibly stopping sandbox \"9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924\"" Feb 9 09:59:17.177869 env[1448]: time="2024-02-09T09:59:17.177836102Z" level=info msg="TearDown network for sandbox \"9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924\" successfully" Feb 9 09:59:17.337421 env[1448]: time="2024-02-09T09:59:17.337355453Z" level=info msg="RemovePodSandbox 
\"9498948ab85deaf01e9691c3fb30c9aa7e2619d3ed0078c51cfae97c1f274924\" returns successfully" Feb 9 09:59:17.539340 kubelet[2631]: E0209 09:59:17.539288 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:59:19.539626 kubelet[2631]: E0209 09:59:19.539591 2631 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:59:20.167999 env[1448]: time="2024-02-09T09:59:20.167944888Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:59:20.199910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75be706efc36439f2bdb9587171787d9adc2ca8c373ad417c1d41e1a061216f6-rootfs.mount: Deactivated successfully. Feb 9 09:59:20.251116 kubelet[2631]: I0209 09:59:20.251090 2631 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:59:20.284375 kubelet[2631]: I0209 09:59:20.284342 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:59:20.288209 kubelet[2631]: I0209 09:59:20.288165 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:59:20.288652 kubelet[2631]: I0209 09:59:20.288635 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:59:20.395698 kubelet[2631]: I0209 09:59:20.395663 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xmmk\" (UniqueName: \"kubernetes.io/projected/089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b-kube-api-access-6xmmk\") pod \"calico-kube-controllers-66ffdb5668-zhk7s\" (UID: \"089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b\") " pod="calico-system/calico-kube-controllers-66ffdb5668-zhk7s" Feb 9 09:59:20.395910 kubelet[2631]: I0209 09:59:20.395897 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2j8\" (UniqueName: \"kubernetes.io/projected/ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b-kube-api-access-8z2j8\") pod \"coredns-787d4945fb-z55w7\" (UID: \"ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b\") " pod="kube-system/coredns-787d4945fb-z55w7" Feb 9 09:59:20.396017 kubelet[2631]: I0209 09:59:20.396006 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grgtb\" (UniqueName: \"kubernetes.io/projected/f6e0c31d-dc6d-4ba7-8de2-7860fda55d58-kube-api-access-grgtb\") pod \"coredns-787d4945fb-cd67q\" (UID: \"f6e0c31d-dc6d-4ba7-8de2-7860fda55d58\") " pod="kube-system/coredns-787d4945fb-cd67q" Feb 9 09:59:20.396106 kubelet[2631]: I0209 09:59:20.396096 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b-tigera-ca-bundle\") pod \"calico-kube-controllers-66ffdb5668-zhk7s\" (UID: \"089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b\") " 
pod="calico-system/calico-kube-controllers-66ffdb5668-zhk7s" Feb 9 09:59:20.396201 kubelet[2631]: I0209 09:59:20.396190 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6e0c31d-dc6d-4ba7-8de2-7860fda55d58-config-volume\") pod \"coredns-787d4945fb-cd67q\" (UID: \"f6e0c31d-dc6d-4ba7-8de2-7860fda55d58\") " pod="kube-system/coredns-787d4945fb-cd67q" Feb 9 09:59:20.396286 kubelet[2631]: I0209 09:59:20.396277 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b-config-volume\") pod \"coredns-787d4945fb-z55w7\" (UID: \"ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b\") " pod="kube-system/coredns-787d4945fb-z55w7" Feb 9 09:59:20.588648 env[1448]: time="2024-02-09T09:59:20.588224299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-cd67q,Uid:f6e0c31d-dc6d-4ba7-8de2-7860fda55d58,Namespace:kube-system,Attempt:0,}" Feb 9 09:59:20.591891 env[1448]: time="2024-02-09T09:59:20.591634405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66ffdb5668-zhk7s,Uid:089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b,Namespace:calico-system,Attempt:0,}" Feb 9 09:59:20.599259 env[1448]: time="2024-02-09T09:59:20.599001901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-z55w7,Uid:ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b,Namespace:kube-system,Attempt:0,}" Feb 9 09:59:20.748387 update_engine[1431]: I0209 09:59:20.748337 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:59:20.748731 update_engine[1431]: I0209 09:59:20.748526 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:59:20.748731 update_engine[1431]: I0209 09:59:20.748688 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 9 09:59:20.813515 update_engine[1431]: E0209 09:59:20.813469 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:59:20.813657 update_engine[1431]: I0209 09:59:20.813587 1431 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 9 09:59:21.542497 env[1448]: time="2024-02-09T09:59:21.542084949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2mw5,Uid:99a671b0-f7e8-4988-baf5-8e0d96bfea44,Namespace:calico-system,Attempt:0,}" Feb 9 09:59:28.166761 env[1448]: time="2024-02-09T09:59:28.166024603Z" level=error msg="collecting metrics for 75be706efc36439f2bdb9587171787d9adc2ca8c373ad417c1d41e1a061216f6" error="cgroups: cgroup deleted: unknown" Feb 9 09:59:28.168060 env[1448]: time="2024-02-09T09:59:28.168025177Z" level=info msg="shim disconnected" id=75be706efc36439f2bdb9587171787d9adc2ca8c373ad417c1d41e1a061216f6 Feb 9 09:59:28.168220 env[1448]: time="2024-02-09T09:59:28.168200739Z" level=warning msg="cleaning up after shim disconnected" id=75be706efc36439f2bdb9587171787d9adc2ca8c373ad417c1d41e1a061216f6 namespace=k8s.io Feb 9 09:59:28.168297 env[1448]: time="2024-02-09T09:59:28.168283939Z" level=info msg="cleaning up dead shim" Feb 9 09:59:28.186496 env[1448]: time="2024-02-09T09:59:28.186459069Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3915 runtime=io.containerd.runc.v2\n" Feb 9 09:59:28.700168 env[1448]: time="2024-02-09T09:59:28.698597062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 09:59:29.106924 env[1448]: time="2024-02-09T09:59:29.106860905Z" level=error msg="Failed to destroy network for sandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.108894 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917-shm.mount: Deactivated successfully. 
Feb 9 09:59:29.109844 env[1448]: time="2024-02-09T09:59:29.109652165Z" level=error msg="encountered an error cleaning up failed sandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.109844 env[1448]: time="2024-02-09T09:59:29.109708285Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66ffdb5668-zhk7s,Uid:089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.110155 kubelet[2631]: E0209 09:59:29.110125 2631 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.110443 kubelet[2631]: E0209 09:59:29.110191 2631 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66ffdb5668-zhk7s" Feb 9 09:59:29.110443 kubelet[2631]: E0209 09:59:29.110213 2631 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66ffdb5668-zhk7s" Feb 9 09:59:29.110443 kubelet[2631]: E0209 09:59:29.110276 2631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66ffdb5668-zhk7s_calico-system(089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66ffdb5668-zhk7s_calico-system(089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66ffdb5668-zhk7s" podUID=089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b Feb 9 09:59:29.151981 env[1448]: time="2024-02-09T09:59:29.151923386Z" level=error msg="Failed to destroy network for sandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.152364 env[1448]: time="2024-02-09T09:59:29.152328669Z" level=error msg="encountered an error cleaning up failed sandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.152419 env[1448]: time="2024-02-09T09:59:29.152381069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2mw5,Uid:99a671b0-f7e8-4988-baf5-8e0d96bfea44,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.152660 kubelet[2631]: E0209 09:59:29.152626 2631 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.152747 kubelet[2631]: E0209 09:59:29.152683 2631 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s2mw5" Feb 9 09:59:29.152747 kubelet[2631]: E0209 09:59:29.152704 2631 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s2mw5" Feb 9 09:59:29.152807 kubelet[2631]: E0209 09:59:29.152757 2631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s2mw5_calico-system(99a671b0-f7e8-4988-baf5-8e0d96bfea44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s2mw5_calico-system(99a671b0-f7e8-4988-baf5-8e0d96bfea44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:59:29.313000 env[1448]: time="2024-02-09T09:59:29.312926654Z" level=error msg="Failed to destroy network for sandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.313645 env[1448]: time="2024-02-09T09:59:29.313614139Z" level=error msg="encountered an error cleaning up failed sandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.313764 env[1448]: time="2024-02-09T09:59:29.313738300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-cd67q,Uid:f6e0c31d-dc6d-4ba7-8de2-7860fda55d58,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.314087 kubelet[2631]: E0209 09:59:29.314027 2631 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.314087 kubelet[2631]: E0209 09:59:29.314080 2631 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-cd67q" Feb 9 09:59:29.314209 kubelet[2631]: E0209 09:59:29.314100 2631 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-cd67q" Feb 9 09:59:29.314209 kubelet[2631]: E0209 09:59:29.314182 2631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-cd67q_kube-system(f6e0c31d-dc6d-4ba7-8de2-7860fda55d58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-cd67q_kube-system(f6e0c31d-dc6d-4ba7-8de2-7860fda55d58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-cd67q" podUID=f6e0c31d-dc6d-4ba7-8de2-7860fda55d58 Feb 9 09:59:29.404737 env[1448]: time="2024-02-09T09:59:29.404620108Z" level=error msg="Failed to destroy network for sandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.405211 env[1448]: time="2024-02-09T09:59:29.405177632Z" level=error msg="encountered an error cleaning up failed sandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.405356 env[1448]: time="2024-02-09T09:59:29.405328033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-z55w7,Uid:ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.405874 kubelet[2631]: E0209 09:59:29.405842 2631 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.405953 kubelet[2631]: E0209 09:59:29.405905 2631 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-z55w7" Feb 9 09:59:29.405953 kubelet[2631]: E0209 09:59:29.405928 2631 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-z55w7" Feb 9 09:59:29.406017 kubelet[2631]: E0209 09:59:29.405976 2631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-z55w7_kube-system(ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-z55w7_kube-system(ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-z55w7" podUID=ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b Feb 9 09:59:29.699667 kubelet[2631]: I0209 09:59:29.699563 2631 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 09:59:29.701707 kubelet[2631]: I0209 09:59:29.701424 2631 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 09:59:29.701987 env[1448]: time="2024-02-09T09:59:29.701946348Z" level=info msg="StopPodSandbox for \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\"" Feb 9 09:59:29.703134 env[1448]: time="2024-02-09T09:59:29.701980188Z" level=info msg="StopPodSandbox for \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\"" Feb 9 09:59:29.704901 kubelet[2631]: I0209 09:59:29.704832 2631 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 09:59:29.710031 env[1448]: time="2024-02-09T09:59:29.709839444Z" level=info msg="StopPodSandbox for \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\"" Feb 9 09:59:29.711867 kubelet[2631]: I0209 09:59:29.711838 2631 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 09:59:29.712498 env[1448]: time="2024-02-09T09:59:29.712456103Z" level=info msg="StopPodSandbox for \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\"" Feb 9 09:59:29.749177 env[1448]: time="2024-02-09T09:59:29.749122844Z" level=error msg="StopPodSandbox for \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\" failed" error="failed to destroy network for sandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.749694 kubelet[2631]: E0209 09:59:29.749548 2631 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 09:59:29.749694 kubelet[2631]: E0209 09:59:29.749591 2631 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f} Feb 9 09:59:29.749694 kubelet[2631]: E0209 09:59:29.749624 2631 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:59:29.749694 kubelet[2631]: E0209 09:59:29.749654 2631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-z55w7" podUID=ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b Feb 9 09:59:29.772028 env[1448]: time="2024-02-09T09:59:29.771937047Z" level=error msg="StopPodSandbox for \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\" failed" error="failed to destroy network for sandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.772827 kubelet[2631]: E0209 09:59:29.772797 2631 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 09:59:29.772978 kubelet[2631]: E0209 09:59:29.772842 2631 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917} Feb 9 09:59:29.772978 kubelet[2631]: E0209 09:59:29.772879 2631 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:59:29.772978 kubelet[2631]: E0209 09:59:29.772906 2631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66ffdb5668-zhk7s" podUID=089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b Feb 9 09:59:29.773230 env[1448]: time="2024-02-09T09:59:29.773195416Z" level=error msg="StopPodSandbox for \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\" failed" error="failed to destroy network for sandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.775889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc-shm.mount: Deactivated successfully. 
Feb 9 09:59:29.777331 kubelet[2631]: E0209 09:59:29.777178 2631 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 09:59:29.777331 kubelet[2631]: E0209 09:59:29.777225 2631 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc} Feb 9 09:59:29.777331 kubelet[2631]: E0209 09:59:29.777261 2631 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99a671b0-f7e8-4988-baf5-8e0d96bfea44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:59:29.777331 kubelet[2631]: E0209 09:59:29.777307 2631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99a671b0-f7e8-4988-baf5-8e0d96bfea44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:59:29.777673 env[1448]: time="2024-02-09T09:59:29.777636647Z" level=error msg="StopPodSandbox for \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\" failed" error="failed to destroy network for sandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:29.777909 kubelet[2631]: E0209 09:59:29.777879 2631 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 09:59:29.777909 kubelet[2631]: E0209 09:59:29.777912 2631 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6} Feb 9 09:59:29.778028 kubelet[2631]: E0209 09:59:29.777944 2631 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6e0c31d-dc6d-4ba7-8de2-7860fda55d58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:59:29.778028 kubelet[2631]: E0209 09:59:29.777967 2631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6e0c31d-dc6d-4ba7-8de2-7860fda55d58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-cd67q" podUID=f6e0c31d-dc6d-4ba7-8de2-7860fda55d58 Feb 9 09:59:30.748715 update_engine[1431]: I0209 09:59:30.748337 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:59:30.748715 update_engine[1431]: I0209 09:59:30.748505 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:59:30.748715 update_engine[1431]: I0209 09:59:30.748670 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 9 09:59:30.857097 update_engine[1431]: E0209 09:59:30.856449 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:59:30.857097 update_engine[1431]: I0209 09:59:30.856585 1431 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 09:59:30.857097 update_engine[1431]: I0209 09:59:30.856593 1431 omaha_request_action.cc:621] Omaha request response: Feb 9 09:59:30.857097 update_engine[1431]: E0209 09:59:30.856696 1431 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 9 09:59:30.857097 update_engine[1431]: I0209 09:59:30.856709 1431 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 9 09:59:30.857097 update_engine[1431]: I0209 09:59:30.856712 1431 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 09:59:30.857097 update_engine[1431]: I0209 09:59:30.856715 1431 update_attempter.cc:306] Processing Done. Feb 9 09:59:30.857097 update_engine[1431]: E0209 09:59:30.856729 1431 update_attempter.cc:619] Update failed. Feb 9 09:59:30.857097 update_engine[1431]: I0209 09:59:30.856731 1431 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 9 09:59:30.857097 update_engine[1431]: I0209 09:59:30.856734 1431 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 9 09:59:30.857097 update_engine[1431]: I0209 09:59:30.856737 1431 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Feb 9 09:59:30.857097 update_engine[1431]: I0209 09:59:30.856796 1431 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 09:59:30.857097 update_engine[1431]: I0209 09:59:30.856814 1431 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 09:59:30.857097 update_engine[1431]: I0209 09:59:30.856817 1431 omaha_request_action.cc:271] Request: Feb 9 09:59:30.857097 update_engine[1431]: Feb 9 09:59:30.857097 update_engine[1431]: Feb 9 09:59:30.857097 update_engine[1431]: Feb 9 09:59:30.857611 update_engine[1431]: Feb 9 09:59:30.857611 update_engine[1431]: Feb 9 09:59:30.857611 update_engine[1431]: Feb 9 09:59:30.857611 update_engine[1431]: I0209 09:59:30.856822 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:59:30.857611 update_engine[1431]: I0209 09:59:30.856928 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:59:30.857611 update_engine[1431]: I0209 09:59:30.857067 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 9 09:59:30.857731 locksmithd[1502]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 9 09:59:30.912113 update_engine[1431]: E0209 09:59:30.911890 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:59:30.912113 update_engine[1431]: I0209 09:59:30.911991 1431 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 09:59:30.912113 update_engine[1431]: I0209 09:59:30.911996 1431 omaha_request_action.cc:621] Omaha request response: Feb 9 09:59:30.912113 update_engine[1431]: I0209 09:59:30.912001 1431 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 09:59:30.912113 update_engine[1431]: I0209 09:59:30.912004 1431 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 09:59:30.912113 update_engine[1431]: I0209 09:59:30.912007 1431 update_attempter.cc:306] Processing Done. Feb 9 09:59:30.912113 update_engine[1431]: I0209 09:59:30.912015 1431 update_attempter.cc:310] Error event sent. Feb 9 09:59:30.912113 update_engine[1431]: I0209 09:59:30.912024 1431 update_check_scheduler.cc:74] Next update check in 44m48s Feb 9 09:59:30.912506 locksmithd[1502]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 9 09:59:39.558792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668873367.mount: Deactivated successfully. 
Feb 9 09:59:41.541150 env[1448]: time="2024-02-09T09:59:41.541103439Z" level=info msg="StopPodSandbox for \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\"" Feb 9 09:59:41.542189 env[1448]: time="2024-02-09T09:59:41.541497882Z" level=info msg="StopPodSandbox for \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\"" Feb 9 09:59:41.571385 env[1448]: time="2024-02-09T09:59:41.571324083Z" level=error msg="StopPodSandbox for \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\" failed" error="failed to destroy network for sandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:41.571755 kubelet[2631]: E0209 09:59:41.571724 2631 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 09:59:41.572064 kubelet[2631]: E0209 09:59:41.571764 2631 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f} Feb 9 09:59:41.572064 kubelet[2631]: E0209 09:59:41.571809 2631 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:59:41.572064 kubelet[2631]: E0209 09:59:41.571836 2631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-z55w7" podUID=ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b Feb 9 09:59:41.573931 env[1448]: time="2024-02-09T09:59:41.573883420Z" level=error msg="StopPodSandbox for \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\" failed" error="failed to destroy network for sandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:41.574099 kubelet[2631]: E0209 09:59:41.574076 2631 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 09:59:41.574099 kubelet[2631]: E0209 09:59:41.574108 2631 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6} Feb 9 09:59:41.574184 kubelet[2631]: E0209 09:59:41.574137 2631 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6e0c31d-dc6d-4ba7-8de2-7860fda55d58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:59:41.574184 kubelet[2631]: E0209 09:59:41.574170 2631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6e0c31d-dc6d-4ba7-8de2-7860fda55d58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-cd67q" podUID=f6e0c31d-dc6d-4ba7-8de2-7860fda55d58 Feb 9 09:59:42.540993 env[1448]: time="2024-02-09T09:59:42.540124669Z" level=info msg="StopPodSandbox for \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\"" Feb 9 09:59:42.561459 env[1448]: time="2024-02-09T09:59:42.561386572Z" level=error msg="StopPodSandbox for \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\" failed" error="failed to destroy network for sandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:42.561786 kubelet[2631]: E0209 09:59:42.561639 2631 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 09:59:42.561786 kubelet[2631]: E0209 09:59:42.561674 2631 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917} Feb 9 09:59:42.561786 kubelet[2631]: E0209 09:59:42.561726 2631 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:59:42.561786 kubelet[2631]: E0209 09:59:42.561761 2631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66ffdb5668-zhk7s" podUID=089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b Feb 9 09:59:45.541616 env[1448]: time="2024-02-09T09:59:45.540793264Z" level=info msg="StopPodSandbox for \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\"" Feb 9 09:59:45.564050 env[1448]: time="2024-02-09T09:59:45.563989205Z" level=error msg="StopPodSandbox for \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\" failed" error="failed to destroy network for sandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 09:59:45.564475 kubelet[2631]: E0209 09:59:45.564328 2631 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 09:59:45.564475 kubelet[2631]: E0209 09:59:45.564368 2631 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc} Feb 9 09:59:45.564475 kubelet[2631]: E0209 09:59:45.564406 2631 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99a671b0-f7e8-4988-baf5-8e0d96bfea44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 09:59:45.564475 kubelet[2631]: E0209 09:59:45.564432 2631 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"99a671b0-f7e8-4988-baf5-8e0d96bfea44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s2mw5" podUID=99a671b0-f7e8-4988-baf5-8e0d96bfea44 Feb 9 09:59:50.824264 env[1448]: time="2024-02-09T09:59:50.823667003Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:50.830385 env[1448]: time="2024-02-09T09:59:50.830329078Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:50.833633 env[1448]: time="2024-02-09T09:59:50.833589790Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:50.837800 env[1448]: time="2024-02-09T09:59:50.837754017Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:50.838560 env[1448]: time="2024-02-09T09:59:50.838529161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774\"" Feb 9 09:59:50.854866 env[1448]: time="2024-02-09T09:59:50.854517773Z" level=info msg="CreateContainer within sandbox \"316e47a97cc9b183f4e67e6e6a9a2103a59e72221e8e37f5be1c7a3a01e659b0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 09:59:50.896717 env[1448]: time="2024-02-09T09:59:50.896663643Z" level=info msg="CreateContainer within sandbox \"316e47a97cc9b183f4e67e6e6a9a2103a59e72221e8e37f5be1c7a3a01e659b0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"91d63ce072a914b20a5508b3e18aa2e28df7768cd29968d1054fb74323b1398c\"" Feb 9 09:59:50.898952 env[1448]: time="2024-02-09T09:59:50.897463670Z" level=info msg="StartContainer for \"91d63ce072a914b20a5508b3e18aa2e28df7768cd29968d1054fb74323b1398c\"" Feb 9 09:59:51.852052 env[1448]: time="2024-02-09T09:59:51.852010351Z" level=info msg="StartContainer for \"91d63ce072a914b20a5508b3e18aa2e28df7768cd29968d1054fb74323b1398c\" returns successfully" Feb 9 09:59:51.895799 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 09:59:51.895971 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 9 09:59:52.905993 systemd[1]: run-containerd-runc-k8s.io-91d63ce072a914b20a5508b3e18aa2e28df7768cd29968d1054fb74323b1398c-runc.igrJMI.mount: Deactivated successfully. 
Feb 9 09:59:53.260000 audit[4343]: AVC avc: denied { write } for pid=4343 comm="tee" name="fd" dev="proc" ino=28993 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:59:53.266820 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 09:59:53.266945 kernel: audit: type=1400 audit(1707472793.260:296): avc: denied { write } for pid=4343 comm="tee" name="fd" dev="proc" ino=28993 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:59:53.260000 audit[4343]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd016997a a2=241 a3=1b6 items=1 ppid=4301 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.343189 kernel: audit: type=1300 audit(1707472793.260:296): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd016997a a2=241 a3=1b6 items=1 ppid=4301 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.260000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 09:59:53.360883 kernel: audit: type=1307 audit(1707472793.260:296): cwd="/etc/service/enabled/bird/log" Feb 9 09:59:53.260000 audit: PATH item=0 name="/dev/fd/63" inode=28975 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:59:53.382874 kernel: audit: type=1302 audit(1707472793.260:296): item=0 name="/dev/fd/63" inode=28975 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:59:53.260000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:59:53.401357 kernel: audit: type=1327 audit(1707472793.260:296): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:59:53.289000 audit[4355]: AVC avc: denied { write } for pid=4355 comm="tee" name="fd" dev="proc" ino=29015 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:59:53.426690 kernel: audit: type=1400 audit(1707472793.289:297): avc: denied { write } for pid=4355 comm="tee" name="fd" dev="proc" ino=29015 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:59:53.289000 audit[4355]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe2d4a96a a2=241 a3=1b6 items=1 ppid=4313 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.454680 kernel: audit: type=1300 audit(1707472793.289:297): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe2d4a96a a2=241 a3=1b6 items=1 ppid=4313 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.289000 
audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 09:59:53.468069 kernel: audit: type=1307 audit(1707472793.289:297): cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 09:59:53.289000 audit: PATH item=0 name="/dev/fd/63" inode=29009 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:59:53.486434 kernel: audit: type=1302 audit(1707472793.289:297): item=0 name="/dev/fd/63" inode=29009 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:59:53.289000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:59:53.296000 audit[4353]: AVC avc: denied { write } for pid=4353 comm="tee" name="fd" dev="proc" ino=29020 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:59:53.296000 audit[4353]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcea3f97b a2=241 a3=1b6 items=1 ppid=4308 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.509389 kernel: audit: type=1327 audit(1707472793.289:297): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:59:53.296000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 09:59:53.296000 audit: PATH item=0 name="/dev/fd/63" inode=29008 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:59:53.296000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:59:53.301000 audit[4363]: AVC avc: denied { write } for pid=4363 comm="tee" name="fd" dev="proc" ino=29026 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:59:53.301000 audit[4363]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd378c979 a2=241 a3=1b6 items=1 ppid=4302 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.301000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 09:59:53.301000 audit: PATH item=0 name="/dev/fd/63" inode=29938 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:59:53.301000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:59:53.303000 audit[4357]: AVC avc: denied { write } for pid=4357 comm="tee" name="fd" dev="proc" ino=29030 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:59:53.303000 audit[4357]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd5f57969 a2=241 a3=1b6 items=1 ppid=4304 
pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.303000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 09:59:53.303000 audit: PATH item=0 name="/dev/fd/63" inode=29012 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:59:53.303000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:59:53.329000 audit[4360]: AVC avc: denied { write } for pid=4360 comm="tee" name="fd" dev="proc" ino=29951 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:59:53.329000 audit[4360]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffda497979 a2=241 a3=1b6 items=1 ppid=4311 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.329000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 09:59:53.329000 audit: PATH item=0 name="/dev/fd/63" inode=29017 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:59:53.329000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:59:53.355000 audit[4369]: AVC avc: denied { write } for pid=4369 comm="tee" name="fd" dev="proc" ino=29958 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 09:59:53.355000 audit[4369]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff52ad979 a2=241 a3=1b6 items=1 ppid=4309 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.355000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 09:59:53.355000 audit: PATH item=0 name="/dev/fd/63" inode=29955 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:59:53.355000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 09:59:53.883434 systemd[1]: run-containerd-runc-k8s.io-91d63ce072a914b20a5508b3e18aa2e28df7768cd29968d1054fb74323b1398c-runc.QaptVq.mount: Deactivated successfully. 
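The PROCTITLE fields in the audit records above are hex-encoded, NUL-separated argv vectors. A small stdlib helper makes them readable; the sample blob below is the tee invocation copied verbatim from the first record, and the decoder itself is generic.

```python
# Decode an audit PROCTITLE hex blob into its argv list.
def decode_proctitle(hex_argv: str) -> list[str]:
    """Audit proctitle is the raw command line with NUL separators, hex-encoded."""
    return bytes.fromhex(hex_argv).decode(errors="replace").split("\x00")

sample = (
    "2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70"
    "726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465"
    "762F66642F3633"
)
print(decode_proctitle(sample))
# ['/usr/bin/coreutils', '--coreutils-prog-shebang=tee', '/usr/bin/tee', '/dev/fd/63']
```

The same decoding applied to the later bpftool records yields "bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp", which is what triggers the bpf/perfmon capability denials.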
Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { bpf } for pid=4460 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { bpf } for pid=4460 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { bpf } for pid=4460 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { bpf } for pid=4460 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit: BPF prog-id=10 op=LOAD Feb 9 09:59:53.908000 audit[4460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffba8a418 a2=70 a3=0 items=0 ppid=4307 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.908000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:59:53.908000 audit: BPF prog-id=10 op=UNLOAD Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { bpf } for pid=4460 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { bpf } for pid=4460 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { bpf } for pid=4460 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { bpf } for pid=4460 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit: BPF prog-id=11 op=LOAD Feb 9 09:59:53.908000 audit[4460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffba8a418 a2=70 a3=4a174c items=0 ppid=4307 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.908000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:59:53.908000 audit: BPF prog-id=11 op=UNLOAD Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { bpf } for pid=4460 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=fffffba8a448 a2=70 a3=3b8f779f items=0 ppid=4307 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.908000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { bpf } for pid=4460 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { bpf } for pid=4460 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { bpf } for pid=4460 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 
09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { perfmon } for pid=4460 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { bpf } for pid=4460 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit[4460]: AVC avc: denied { bpf } for pid=4460 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.908000 audit: BPF prog-id=12 op=LOAD Feb 9 09:59:53.908000 audit[4460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffba8a398 a2=70 a3=3b8f77b9 items=0 ppid=4307 pid=4460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.908000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 09:59:53.916000 audit[4462]: AVC avc: denied { bpf } for pid=4462 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.916000 audit[4462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc758d628 a2=70 a3=0 items=0 ppid=4307 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.916000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 09:59:53.916000 audit[4462]: AVC avc: denied { bpf } for pid=4462 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 09:59:53.916000 audit[4462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc758d508 a2=70 a3=2 items=0 ppid=4307 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.916000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 09:59:53.923000 audit: BPF prog-id=12 op=UNLOAD Feb 9 09:59:53.923000 audit[1252]: SYSCALL arch=c00000b7 syscall=220 success=yes exit=4465 a0=1200011 a1=0 a2=0 a3=0 items=0 ppid=1 pid=1252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="systemd-udevd" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:53.923000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-udevd" Feb 9 09:59:54.031000 audit[4489]: NETFILTER_CFG table=raw:123 family=2 entries=19 op=nft_register_chain pid=4489 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:59:54.031000 audit[4489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6132 a0=3 a1=ffffded003f0 a2=0 a3=ffff9b701fa8 items=0 ppid=4307 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:54.031000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:59:54.044000 audit[4490]: NETFILTER_CFG table=nat:124 family=2 entries=16 op=nft_register_chain pid=4490 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:59:54.044000 audit[4490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5188 a0=3 a1=fffff2376330 a2=0 a3=ffff9d354fa8 items=0 ppid=4307 pid=4490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:54.044000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:59:54.045000 audit[4493]: NETFILTER_CFG table=mangle:125 family=2 entries=19 op=nft_register_chain pid=4493 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:59:54.045000 audit[4493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6800 a0=3 a1=ffffca7a3900 a2=0 a3=ffff843fdfa8 items=0 ppid=4307 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:54.045000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:59:54.049000 audit[4492]: NETFILTER_CFG table=filter:126 family=2 entries=39 op=nft_register_chain pid=4492 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:59:54.049000 audit[4492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18472 a0=3 a1=fffff41b3900 a2=0 a3=ffffb02c3fa8 items=0 ppid=4307 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:54.049000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:59:54.752411 systemd-networkd[1590]: vxlan.calico: Link UP Feb 9 09:59:54.752418 systemd-networkd[1590]: vxlan.calico: Gained carrier Feb 9 09:59:55.540981 env[1448]: time="2024-02-09T09:59:55.540935605Z" level=info msg="StopPodSandbox for \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\"" Feb 9 09:59:55.603995 kubelet[2631]: I0209 09:59:55.603612 2631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="calico-system/calico-node-hxw57" podStartSLOduration=-9.223371986251204e+09 pod.CreationTimestamp="2024-02-09 09:59:05 +0000 UTC" firstStartedPulling="2024-02-09 09:59:07.663396082 +0000 UTC m=+52.527580408" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:59:52.877417314 +0000 UTC m=+97.741601600" watchObservedRunningTime="2024-02-09 09:59:55.60357266 +0000 UTC m=+100.467756986" Feb 9 09:59:55.636983 env[1448]: 2024-02-09 09:59:55.603 [INFO][4517] k8s.go 578: Cleaning up netns ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 09:59:55.636983 env[1448]: 2024-02-09 09:59:55.603 [INFO][4517] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" iface="eth0" netns="/var/run/netns/cni-cec70352-a4bd-eeb7-7f94-1660931b961e" Feb 9 09:59:55.636983 env[1448]: 2024-02-09 09:59:55.603 [INFO][4517] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" iface="eth0" netns="/var/run/netns/cni-cec70352-a4bd-eeb7-7f94-1660931b961e" Feb 9 09:59:55.636983 env[1448]: 2024-02-09 09:59:55.604 [INFO][4517] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" iface="eth0" netns="/var/run/netns/cni-cec70352-a4bd-eeb7-7f94-1660931b961e" Feb 9 09:59:55.636983 env[1448]: 2024-02-09 09:59:55.604 [INFO][4517] k8s.go 585: Releasing IP address(es) ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 09:59:55.636983 env[1448]: 2024-02-09 09:59:55.604 [INFO][4517] utils.go 188: Calico CNI releasing IP address ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 09:59:55.636983 env[1448]: 2024-02-09 09:59:55.621 [INFO][4523] ipam_plugin.go 415: Releasing address using handleID ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" HandleID="k8s-pod-network.6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 09:59:55.636983 env[1448]: 2024-02-09 09:59:55.621 [INFO][4523] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:59:55.636983 env[1448]: 2024-02-09 09:59:55.621 [INFO][4523] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:59:55.636983 env[1448]: 2024-02-09 09:59:55.630 [WARNING][4523] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" HandleID="k8s-pod-network.6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 09:59:55.636983 env[1448]: 2024-02-09 09:59:55.631 [INFO][4523] ipam_plugin.go 443: Releasing address using workloadID ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" HandleID="k8s-pod-network.6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 09:59:55.636983 env[1448]: 2024-02-09 09:59:55.632 [INFO][4523] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:59:55.636983 env[1448]: 2024-02-09 09:59:55.635 [INFO][4517] k8s.go 591: Teardown processing complete. 
ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 09:59:55.639985 env[1448]: time="2024-02-09T09:59:55.639605996Z" level=info msg="TearDown network for sandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\" successfully" Feb 9 09:59:55.639985 env[1448]: time="2024-02-09T09:59:55.639672041Z" level=info msg="StopPodSandbox for \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\" returns successfully" Feb 9 09:59:55.639039 systemd[1]: run-netns-cni\x2dcec70352\x2da4bd\x2deeb7\x2d7f94\x2d1660931b961e.mount: Deactivated successfully. Feb 9 09:59:55.640226 env[1448]: time="2024-02-09T09:59:55.640114156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-cd67q,Uid:f6e0c31d-dc6d-4ba7-8de2-7860fda55d58,Namespace:kube-system,Attempt:1,}" Feb 9 09:59:55.829192 systemd-networkd[1590]: cali73f928224c1: Link UP Feb 9 09:59:55.842352 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:59:55.842465 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali73f928224c1: link becomes ready Feb 9 09:59:55.842959 systemd-networkd[1590]: cali73f928224c1: Gained carrier Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.755 [INFO][4529] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0 coredns-787d4945fb- kube-system f6e0c31d-dc6d-4ba7-8de2-7860fda55d58 876 0 2024-02-09 09:58:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.2-a-d10cdd880c coredns-787d4945fb-cd67q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali73f928224c1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" Namespace="kube-system" Pod="coredns-787d4945fb-cd67q" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-" Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.755 [INFO][4529] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" Namespace="kube-system" Pod="coredns-787d4945fb-cd67q" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.784 [INFO][4543] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" HandleID="k8s-pod-network.1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.796 [INFO][4543] ipam_plugin.go 268: Auto assigning IP ContainerID="1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" HandleID="k8s-pod-network.1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027b9c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.2-a-d10cdd880c", "pod":"coredns-787d4945fb-cd67q", "timestamp":"2024-02-09 09:59:55.784174454 +0000 UTC"}, Hostname:"ci-3510.3.2-a-d10cdd880c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.796 [INFO][4543] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.796 [INFO][4543] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.796 [INFO][4543] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-d10cdd880c' Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.798 [INFO][4543] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.807 [INFO][4543] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.811 [INFO][4543] ipam.go 489: Trying affinity for 192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.813 [INFO][4543] ipam.go 155: Attempting to load block cidr=192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.815 [INFO][4543] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.815 [INFO][4543] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.816 [INFO][4543] ipam.go 1682: Creating new handle: k8s-pod-network.1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.820 [INFO][4543] ipam.go 1203: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.825 [INFO][4543] ipam.go 1216: Successfully claimed IPs: [192.168.73.65/26] block=192.168.73.64/26 handle="k8s-pod-network.1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.825 [INFO][4543] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.65/26] handle="k8s-pod-network.1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.825 [INFO][4543] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:59:55.860634 env[1448]: 2024-02-09 09:59:55.825 [INFO][4543] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.73.65/26] IPv6=[] ContainerID="1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" HandleID="k8s-pod-network.1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 09:59:55.861369 env[1448]: 2024-02-09 09:59:55.827 [INFO][4529] k8s.go 385: Populated endpoint ContainerID="1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" Namespace="kube-system" Pod="coredns-787d4945fb-cd67q" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"f6e0c31d-dc6d-4ba7-8de2-7860fda55d58", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"", Pod:"coredns-787d4945fb-cd67q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73f928224c1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:59:55.861369 env[1448]: 2024-02-09 09:59:55.827 [INFO][4529] k8s.go 386: Calico CNI using IPs: [192.168.73.65/32] ContainerID="1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" Namespace="kube-system" Pod="coredns-787d4945fb-cd67q" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 09:59:55.861369 env[1448]: 2024-02-09 09:59:55.827 [INFO][4529] dataplane_linux.go 68: Setting the host side veth name to cali73f928224c1 ContainerID="1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" Namespace="kube-system" Pod="coredns-787d4945fb-cd67q" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 09:59:55.861369 env[1448]: 2024-02-09 09:59:55.844 [INFO][4529] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" Namespace="kube-system" Pod="coredns-787d4945fb-cd67q" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 09:59:55.861369 env[1448]: 
2024-02-09 09:59:55.844 [INFO][4529] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" Namespace="kube-system" Pod="coredns-787d4945fb-cd67q" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"f6e0c31d-dc6d-4ba7-8de2-7860fda55d58", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf", Pod:"coredns-787d4945fb-cd67q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73f928224c1", MAC:"e2:b6:07:ab:f8:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:59:55.861369 env[1448]: 2024-02-09 09:59:55.857 [INFO][4529] k8s.go 491: Wrote updated endpoint to datastore ContainerID="1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf" Namespace="kube-system" Pod="coredns-787d4945fb-cd67q" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 09:59:55.876000 audit[4565]: NETFILTER_CFG table=filter:127 family=2 entries=36 op=nft_register_chain pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:59:55.876000 audit[4565]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19908 a0=3 a1=ffffd05bc100 a2=0 a3=ffffa00f5fa8 items=0 ppid=4307 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:55.876000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:59:55.903811 env[1448]: time="2024-02-09T09:59:55.903734557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:59:55.903924 env[1448]: time="2024-02-09T09:59:55.903817483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:59:55.903924 env[1448]: time="2024-02-09T09:59:55.903858487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:59:55.904114 env[1448]: time="2024-02-09T09:59:55.904057622Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf pid=4573 runtime=io.containerd.runc.v2 Feb 9 09:59:55.961598 env[1448]: time="2024-02-09T09:59:55.961544594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-cd67q,Uid:f6e0c31d-dc6d-4ba7-8de2-7860fda55d58,Namespace:kube-system,Attempt:1,} returns sandbox id \"1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf\"" Feb 9 09:59:55.966616 env[1448]: time="2024-02-09T09:59:55.966569387Z" level=info msg="CreateContainer within sandbox \"1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:59:56.006573 env[1448]: time="2024-02-09T09:59:56.006518464Z" level=info msg="CreateContainer within sandbox \"1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0fc5c91b330b9522572b60334d188954605b07c6b0cbaca91e69d3e347233052\"" Feb 9 09:59:56.009277 env[1448]: time="2024-02-09T09:59:56.009128985Z" level=info msg="StartContainer for \"0fc5c91b330b9522572b60334d188954605b07c6b0cbaca91e69d3e347233052\"" Feb 9 09:59:56.063204 env[1448]: time="2024-02-09T09:59:56.063143513Z" level=info msg="StartContainer for \"0fc5c91b330b9522572b60334d188954605b07c6b0cbaca91e69d3e347233052\" returns successfully" Feb 9 09:59:56.539539 env[1448]: time="2024-02-09T09:59:56.539452308Z" level=info msg="StopPodSandbox for \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\"" Feb 9 09:59:56.611251 env[1448]: 2024-02-09 09:59:56.580 [INFO][4660] k8s.go 578: Cleaning up netns ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 09:59:56.611251 env[1448]: 2024-02-09 09:59:56.580 [INFO][4660] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" iface="eth0" netns="/var/run/netns/cni-68e6834f-3c39-7ae5-e356-ce3d2e5dde06" Feb 9 09:59:56.611251 env[1448]: 2024-02-09 09:59:56.581 [INFO][4660] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" iface="eth0" netns="/var/run/netns/cni-68e6834f-3c39-7ae5-e356-ce3d2e5dde06" Feb 9 09:59:56.611251 env[1448]: 2024-02-09 09:59:56.581 [INFO][4660] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" iface="eth0" netns="/var/run/netns/cni-68e6834f-3c39-7ae5-e356-ce3d2e5dde06" Feb 9 09:59:56.611251 env[1448]: 2024-02-09 09:59:56.581 [INFO][4660] k8s.go 585: Releasing IP address(es) ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 09:59:56.611251 env[1448]: 2024-02-09 09:59:56.581 [INFO][4660] utils.go 188: Calico CNI releasing IP address ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 09:59:56.611251 env[1448]: 2024-02-09 09:59:56.597 [INFO][4666] ipam_plugin.go 415: Releasing address using handleID ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" HandleID="k8s-pod-network.c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 09:59:56.611251 env[1448]: 2024-02-09 09:59:56.598 [INFO][4666] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:59:56.611251 env[1448]: 2024-02-09 09:59:56.598 [INFO][4666] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:59:56.611251 env[1448]: 2024-02-09 09:59:56.607 [WARNING][4666] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" HandleID="k8s-pod-network.c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 09:59:56.611251 env[1448]: 2024-02-09 09:59:56.607 [INFO][4666] ipam_plugin.go 443: Releasing address using workloadID ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" HandleID="k8s-pod-network.c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 09:59:56.611251 env[1448]: 2024-02-09 09:59:56.608 [INFO][4666] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:59:56.611251 env[1448]: 2024-02-09 09:59:56.610 [INFO][4660] k8s.go 591: Teardown processing complete. ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 09:59:56.611916 env[1448]: time="2024-02-09T09:59:56.611429422Z" level=info msg="TearDown network for sandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\" successfully" Feb 9 09:59:56.611916 env[1448]: time="2024-02-09T09:59:56.611469305Z" level=info msg="StopPodSandbox for \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\" returns successfully" Feb 9 09:59:56.612491 env[1448]: time="2024-02-09T09:59:56.612463222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-z55w7,Uid:ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b,Namespace:kube-system,Attempt:1,}" Feb 9 09:59:56.639838 systemd[1]: run-netns-cni\x2d68e6834f\x2d3c39\x2d7ae5\x2de356\x2dce3d2e5dde06.mount: Deactivated successfully. 
Feb 9 09:59:56.729513 systemd-networkd[1590]: vxlan.calico: Gained IPv6LL Feb 9 09:59:56.779010 systemd-networkd[1590]: cali751b1c81fba: Link UP Feb 9 09:59:56.788969 systemd-networkd[1590]: cali751b1c81fba: Gained carrier Feb 9 09:59:56.789712 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali751b1c81fba: link becomes ready Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.694 [INFO][4672] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0 coredns-787d4945fb- kube-system ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b 887 0 2024-02-09 09:58:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.2-a-d10cdd880c coredns-787d4945fb-z55w7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali751b1c81fba [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" Namespace="kube-system" Pod="coredns-787d4945fb-z55w7" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-" Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.695 [INFO][4672] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" Namespace="kube-system" Pod="coredns-787d4945fb-z55w7" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.728 [INFO][4684] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" HandleID="k8s-pod-network.de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.742 [INFO][4684] ipam_plugin.go 268: Auto assigning IP ContainerID="de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" HandleID="k8s-pod-network.de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002bca40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.2-a-d10cdd880c", "pod":"coredns-787d4945fb-z55w7", "timestamp":"2024-02-09 09:59:56.728773317 +0000 UTC"}, Hostname:"ci-3510.3.2-a-d10cdd880c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.742 [INFO][4684] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.742 [INFO][4684] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.742 [INFO][4684] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-d10cdd880c' Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.744 [INFO][4684] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.748 [INFO][4684] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.751 [INFO][4684] ipam.go 489: Trying affinity for 192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.753 [INFO][4684] ipam.go 155: Attempting to load block cidr=192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.755 [INFO][4684] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.755 [INFO][4684] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.756 [INFO][4684] ipam.go 1682: Creating new handle: k8s-pod-network.de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.759 [INFO][4684] ipam.go 1203: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.765 [INFO][4684] ipam.go 1216: Successfully claimed IPs: [192.168.73.66/26] block=192.168.73.64/26 handle="k8s-pod-network.de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.766 [INFO][4684] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.66/26] handle="k8s-pod-network.de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.766 [INFO][4684] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:59:56.798460 env[1448]: 2024-02-09 09:59:56.766 [INFO][4684] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.73.66/26] IPv6=[] ContainerID="de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" HandleID="k8s-pod-network.de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 09:59:56.799005 env[1448]: 2024-02-09 09:59:56.768 [INFO][4672] k8s.go 385: Populated endpoint ContainerID="de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" Namespace="kube-system" Pod="coredns-787d4945fb-z55w7" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"", Pod:"coredns-787d4945fb-z55w7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali751b1c81fba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:59:56.799005 env[1448]: 2024-02-09 09:59:56.768 [INFO][4672] k8s.go 386: Calico CNI using IPs: [192.168.73.66/32] ContainerID="de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" Namespace="kube-system" Pod="coredns-787d4945fb-z55w7" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 09:59:56.799005 env[1448]: 2024-02-09 09:59:56.768 [INFO][4672] dataplane_linux.go 68: Setting the host side veth name to cali751b1c81fba ContainerID="de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" Namespace="kube-system" Pod="coredns-787d4945fb-z55w7" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 09:59:56.799005 env[1448]: 2024-02-09 09:59:56.788 [INFO][4672] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" Namespace="kube-system" Pod="coredns-787d4945fb-z55w7" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 09:59:56.799005 env[1448]: 
2024-02-09 09:59:56.790 [INFO][4672] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" Namespace="kube-system" Pod="coredns-787d4945fb-z55w7" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa", Pod:"coredns-787d4945fb-z55w7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali751b1c81fba", MAC:"f6:c3:71:c6:35:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:59:56.799005 env[1448]: 2024-02-09 09:59:56.797 [INFO][4672] k8s.go 491: Wrote updated endpoint to datastore ContainerID="de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa" Namespace="kube-system" Pod="coredns-787d4945fb-z55w7" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 09:59:56.816000 audit[4703]: NETFILTER_CFG table=filter:128 family=2 entries=30 op=nft_register_chain pid=4703 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:59:56.816000 audit[4703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=16712 a0=3 a1=fffffdb2a940 a2=0 a3=ffffb9ac0fa8 items=0 ppid=4307 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:56.816000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:59:56.820296 env[1448]: time="2024-02-09T09:59:56.820236295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:59:56.820466 env[1448]: time="2024-02-09T09:59:56.820277218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:59:56.820466 env[1448]: time="2024-02-09T09:59:56.820287859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:59:56.820568 env[1448]: time="2024-02-09T09:59:56.820510036Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa pid=4711 runtime=io.containerd.runc.v2 Feb 9 09:59:56.884426 kubelet[2631]: I0209 09:59:56.884378 2631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-cd67q" podStartSLOduration=86.884338521 pod.CreationTimestamp="2024-02-09 09:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:59:56.876605524 +0000 UTC m=+101.740789850" watchObservedRunningTime="2024-02-09 09:59:56.884338521 +0000 UTC m=+101.748522847" Feb 9 09:59:56.894479 env[1448]: time="2024-02-09T09:59:56.885353999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-z55w7,Uid:ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b,Namespace:kube-system,Attempt:1,} returns sandbox id \"de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa\"" Feb 9 09:59:56.894479 env[1448]: time="2024-02-09T09:59:56.888180458Z" level=info msg="CreateContainer within sandbox \"de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:59:56.927822 env[1448]: time="2024-02-09T09:59:56.927779873Z" level=info msg="CreateContainer within sandbox \"de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"969366d52aefda6f14b269f20433516de56982c3b0967231e270a920d0de6032\"" Feb 9 09:59:56.928674 env[1448]: time="2024-02-09T09:59:56.928635739Z" level=info msg="StartContainer for \"969366d52aefda6f14b269f20433516de56982c3b0967231e270a920d0de6032\"" Feb 9 09:59:56.951000 audit[4796]: NETFILTER_CFG table=filter:129 family=2 entries=12 op=nft_register_rule pid=4796 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:59:56.951000 audit[4796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffda66f060 a2=0 a3=ffffaf33f6c0 items=0 ppid=2831 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:56.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:59:56.952000 audit[4796]: NETFILTER_CFG table=nat:130 family=2 entries=30 op=nft_register_rule pid=4796 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:59:56.952000 audit[4796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffda66f060 a2=0 a3=ffffaf33f6c0 items=0 ppid=2831 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 9 09:59:56.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:59:56.979967 env[1448]: time="2024-02-09T09:59:56.979918337Z" level=info msg="StartContainer for \"969366d52aefda6f14b269f20433516de56982c3b0967231e270a920d0de6032\" returns successfully" Feb 9 09:59:57.000000 audit[4834]: NETFILTER_CFG table=filter:131 family=2 entries=9 op=nft_register_rule pid=4834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:59:57.000000 audit[4834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffc90bc510 a2=0 a3=ffff93d726c0 items=0 ppid=2831 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:57.000000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:59:57.001000 audit[4834]: NETFILTER_CFG table=nat:132 family=2 entries=51 op=nft_register_chain pid=4834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:59:57.001000 audit[4834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=ffffc90bc510 a2=0 a3=ffff93d726c0 items=0 ppid=2831 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:57.001000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:59:57.540430 env[1448]: time="2024-02-09T09:59:57.540392624Z" level=info msg="StopPodSandbox for \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\"" Feb 9 09:59:57.620875 env[1448]: 2024-02-09 09:59:57.586 [INFO][4850] k8s.go 578: Cleaning up netns ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 09:59:57.620875 env[1448]: 2024-02-09 09:59:57.586 [INFO][4850] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" iface="eth0" netns="/var/run/netns/cni-21686b4c-5022-20aa-3717-97bda1f058da" Feb 9 09:59:57.620875 env[1448]: 2024-02-09 09:59:57.586 [INFO][4850] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" iface="eth0" netns="/var/run/netns/cni-21686b4c-5022-20aa-3717-97bda1f058da" Feb 9 09:59:57.620875 env[1448]: 2024-02-09 09:59:57.586 [INFO][4850] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" iface="eth0" netns="/var/run/netns/cni-21686b4c-5022-20aa-3717-97bda1f058da" Feb 9 09:59:57.620875 env[1448]: 2024-02-09 09:59:57.586 [INFO][4850] k8s.go 585: Releasing IP address(es) ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 09:59:57.620875 env[1448]: 2024-02-09 09:59:57.586 [INFO][4850] utils.go 188: Calico CNI releasing IP address ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 09:59:57.620875 env[1448]: 2024-02-09 09:59:57.607 [INFO][4857] ipam_plugin.go 415: Releasing address using handleID ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" HandleID="k8s-pod-network.f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 09:59:57.620875 env[1448]: 2024-02-09 09:59:57.607 [INFO][4857] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:59:57.620875 env[1448]: 2024-02-09 09:59:57.607 [INFO][4857] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 09:59:57.620875 env[1448]: 2024-02-09 09:59:57.616 [WARNING][4857] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" HandleID="k8s-pod-network.f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 09:59:57.620875 env[1448]: 2024-02-09 09:59:57.616 [INFO][4857] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" HandleID="k8s-pod-network.f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 09:59:57.620875 env[1448]: 2024-02-09 09:59:57.618 [INFO][4857] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 09:59:57.620875 env[1448]: 2024-02-09 09:59:57.619 [INFO][4850] k8s.go 591: Teardown processing complete. ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 09:59:57.621676 env[1448]: time="2024-02-09T09:59:57.621643095Z" level=info msg="TearDown network for sandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\" successfully" Feb 9 09:59:57.621754 env[1448]: time="2024-02-09T09:59:57.621738143Z" level=info msg="StopPodSandbox for \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\" returns successfully" Feb 9 09:59:57.622520 env[1448]: time="2024-02-09T09:59:57.622491480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66ffdb5668-zhk7s,Uid:089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b,Namespace:calico-system,Attempt:1,}" Feb 9 09:59:57.640096 systemd[1]: run-containerd-runc-k8s.io-de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa-runc.2NtJke.mount: Deactivated successfully. Feb 9 09:59:57.640257 systemd[1]: run-netns-cni\x2d21686b4c\x2d5022\x2d20aa\x2d3717\x2d97bda1f058da.mount: Deactivated successfully. 
Feb 9 09:59:57.784441 systemd-networkd[1590]: cali4d0d1b78371: Link UP Feb 9 09:59:57.791351 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:59:57.799320 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4d0d1b78371: link becomes ready Feb 9 09:59:57.800087 systemd-networkd[1590]: cali4d0d1b78371: Gained carrier Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.704 [INFO][4863] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0 calico-kube-controllers-66ffdb5668- calico-system 089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b 903 0 2024-02-09 09:58:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66ffdb5668 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.2-a-d10cdd880c calico-kube-controllers-66ffdb5668-zhk7s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4d0d1b78371 [] []}} ContainerID="8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" Namespace="calico-system" Pod="calico-kube-controllers-66ffdb5668-zhk7s" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-" Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.705 [INFO][4863] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" Namespace="calico-system" Pod="calico-kube-controllers-66ffdb5668-zhk7s" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.733 [INFO][4875] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" HandleID="k8s-pod-network.8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.746 [INFO][4875] ipam_plugin.go 268: Auto assigning IP ContainerID="8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" HandleID="k8s-pod-network.8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011a4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.2-a-d10cdd880c", "pod":"calico-kube-controllers-66ffdb5668-zhk7s", "timestamp":"2024-02-09 09:59:57.733951693 +0000 UTC"}, Hostname:"ci-3510.3.2-a-d10cdd880c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.746 [INFO][4875] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.746 [INFO][4875] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.746 [INFO][4875] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-d10cdd880c' Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.747 [INFO][4875] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.750 [INFO][4875] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.753 [INFO][4875] ipam.go 489: Trying affinity for 192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.754 [INFO][4875] ipam.go 155: Attempting to load block cidr=192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.756 [INFO][4875] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.756 [INFO][4875] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.758 [INFO][4875] ipam.go 1682: Creating new handle: k8s-pod-network.8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60 Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.762 [INFO][4875] ipam.go 1203: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.766 [INFO][4875] ipam.go 1216: Successfully claimed IPs: [192.168.73.67/26] block=192.168.73.64/26 handle="k8s-pod-network.8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.766 [INFO][4875] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.67/26] handle="k8s-pod-network.8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" host="ci-3510.3.2-a-d10cdd880c" Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.766 [INFO][4875] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 09:59:57.817249 env[1448]: 2024-02-09 09:59:57.766 [INFO][4875] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.73.67/26] IPv6=[] ContainerID="8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" HandleID="k8s-pod-network.8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 09:59:57.817849 env[1448]: 2024-02-09 09:59:57.770 [INFO][4863] k8s.go 385: Populated endpoint ContainerID="8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" Namespace="calico-system" Pod="calico-kube-controllers-66ffdb5668-zhk7s" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0", GenerateName:"calico-kube-controllers-66ffdb5668-", Namespace:"calico-system", SelfLink:"", UID:"089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66ffdb5668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"", Pod:"calico-kube-controllers-66ffdb5668-zhk7s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d0d1b78371", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:59:57.817849 env[1448]: 2024-02-09 09:59:57.770 [INFO][4863] k8s.go 386: Calico CNI using IPs: [192.168.73.67/32] ContainerID="8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" Namespace="calico-system" Pod="calico-kube-controllers-66ffdb5668-zhk7s" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 09:59:57.817849 env[1448]: 2024-02-09 09:59:57.771 [INFO][4863] dataplane_linux.go 68: Setting the host side veth name to cali4d0d1b78371 ContainerID="8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" Namespace="calico-system" Pod="calico-kube-controllers-66ffdb5668-zhk7s" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 09:59:57.817849 env[1448]: 2024-02-09 09:59:57.800 [INFO][4863] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" Namespace="calico-system" Pod="calico-kube-controllers-66ffdb5668-zhk7s" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 09:59:57.817849 env[1448]: 2024-02-09 09:59:57.801 [INFO][4863] k8s.go 413: Added Mac, interface name, and active 
container ID to endpoint ContainerID="8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" Namespace="calico-system" Pod="calico-kube-controllers-66ffdb5668-zhk7s" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0", GenerateName:"calico-kube-controllers-66ffdb5668-", Namespace:"calico-system", SelfLink:"", UID:"089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66ffdb5668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60", Pod:"calico-kube-controllers-66ffdb5668-zhk7s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d0d1b78371", MAC:"d6:6a:2f:a9:4e:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 09:59:57.817849 env[1448]: 2024-02-09 09:59:57.815 [INFO][4863] k8s.go 491: Wrote updated endpoint to datastore ContainerID="8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60" Namespace="calico-system" Pod="calico-kube-controllers-66ffdb5668-zhk7s" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 09:59:57.836732 env[1448]: time="2024-02-09T09:59:57.836668320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:59:57.836911 env[1448]: time="2024-02-09T09:59:57.836888337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:59:57.837015 env[1448]: time="2024-02-09T09:59:57.836993585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:59:57.837257 env[1448]: time="2024-02-09T09:59:57.837228483Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60 pid=4903 runtime=io.containerd.runc.v2 Feb 9 09:59:57.842000 audit[4908]: NETFILTER_CFG table=filter:133 family=2 entries=50 op=nft_register_chain pid=4908 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 09:59:57.842000 audit[4908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25136 a0=3 a1=ffffd0d703b0 a2=0 a3=ffff87ee7fa8 items=0 ppid=4307 pid=4908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:57.842000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 09:59:57.881323 kubelet[2631]: I0209 09:59:57.881271 2631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-z55w7" podStartSLOduration=87.881237756 pod.CreationTimestamp="2024-02-09 09:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:59:57.879762524 +0000 UTC m=+102.743946850" watchObservedRunningTime="2024-02-09 09:59:57.881237756 +0000 UTC m=+102.745422082" Feb 9 09:59:57.882460 systemd-networkd[1590]: cali73f928224c1: Gained IPv6LL Feb 9 09:59:57.886436 systemd-networkd[1590]: cali751b1c81fba: Gained IPv6LL Feb 9 09:59:57.928947 env[1448]: time="2024-02-09T09:59:57.928908029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66ffdb5668-zhk7s,Uid:089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b,Namespace:calico-system,Attempt:1,} returns sandbox id \"8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60\"" Feb 9 09:59:57.932794 env[1448]: time="2024-02-09T09:59:57.932746801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 9 09:59:57.945000 audit[4964]: NETFILTER_CFG table=filter:134 family=2 entries=6 op=nft_register_rule pid=4964 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:59:57.945000 audit[4964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffc2b4d520 a2=0 a3=ffff8ccda6c0 items=0 ppid=2831 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:57.945000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:59:57.947000 audit[4964]: NETFILTER_CFG table=nat:135 family=2 entries=60 op=nft_register_rule pid=4964 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:59:57.947000 audit[4964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=ffffc2b4d520 a2=0 a3=ffff8ccda6c0 items=0 ppid=2831 pid=4964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:57.947000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:59:58.980000 audit[4990]: NETFILTER_CFG table=filter:136 family=2 entries=6 op=nft_register_rule pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:59:58.987081 kernel: kauditd_printk_skb: 115 callbacks suppressed Feb 9 09:59:58.987140 kernel: audit: type=1325 audit(1707472798.980:325): table=filter:136 family=2 entries=6 op=nft_register_rule pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:59:58.980000 audit[4990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd1958900 a2=0 a3=ffff881076c0 items=0 ppid=2831 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:59.035686 kernel: audit: type=1300 audit(1707472798.980:325): arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd1958900 a2=0 a3=ffff881076c0 items=0 ppid=2831 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:58.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:59:59.051892 kernel: audit: type=1327 audit(1707472798.980:325): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:59:59.055000 audit[4990]: NETFILTER_CFG table=nat:137 family=2 entries=72 op=nft_register_chain pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:59:59.055000 audit[4990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd1958900 a2=0 a3=ffff881076c0 items=0 ppid=2831 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:59.106556 kernel: audit: type=1325 audit(1707472799.055:326): table=nat:137 family=2 entries=72 op=nft_register_chain pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 09:59:59.106749 kernel: audit: type=1300 audit(1707472799.055:326): arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd1958900 a2=0 a3=ffff881076c0 items=0 ppid=2831 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:59:59.055000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:59:59.126997 kernel: audit: type=1327 audit(1707472799.055:326): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 09:59:59.481500 systemd-networkd[1590]: cali4d0d1b78371: Gained IPv6LL Feb 9 09:59:59.577508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704739935.mount: Deactivated successfully. 
Feb 9 10:00:00.330738 env[1448]: time="2024-02-09T10:00:00.330689239Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:00.336740 env[1448]: time="2024-02-09T10:00:00.336704321Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:00.341980 env[1448]: time="2024-02-09T10:00:00.341943625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:00.346799 env[1448]: time="2024-02-09T10:00:00.346764419Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:00.347043 env[1448]: time="2024-02-09T10:00:00.347014878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8\"" Feb 9 10:00:00.363649 env[1448]: time="2024-02-09T10:00:00.363604255Z" level=info msg="CreateContainer within sandbox \"8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 9 10:00:00.407117 env[1448]: time="2024-02-09T10:00:00.407068765Z" level=info msg="CreateContainer within sandbox \"8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f5160b77947cadfadae2b25c5878a61febea174b54ea0da60bd689352dee80a8\"" Feb 9 10:00:00.408558 env[1448]: time="2024-02-09T10:00:00.408525952Z" level=info msg="StartContainer for \"f5160b77947cadfadae2b25c5878a61febea174b54ea0da60bd689352dee80a8\"" Feb 9 10:00:00.468688 env[1448]: time="2024-02-09T10:00:00.468639684Z" level=info msg="StartContainer for \"f5160b77947cadfadae2b25c5878a61febea174b54ea0da60bd689352dee80a8\" returns successfully" Feb 9 10:00:00.539688 env[1448]: time="2024-02-09T10:00:00.539578890Z" level=info msg="StopPodSandbox for \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\"" Feb 9 10:00:00.574294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1498942471.mount: Deactivated successfully. Feb 9 10:00:00.660772 env[1448]: 2024-02-09 10:00:00.607 [INFO][5046] k8s.go 578: Cleaning up netns ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 10:00:00.660772 env[1448]: 2024-02-09 10:00:00.607 [INFO][5046] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" iface="eth0" netns="/var/run/netns/cni-1df0c73b-79b3-a4b9-5459-c2cf72f52551" Feb 9 10:00:00.660772 env[1448]: 2024-02-09 10:00:00.608 [INFO][5046] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" iface="eth0" netns="/var/run/netns/cni-1df0c73b-79b3-a4b9-5459-c2cf72f52551" Feb 9 10:00:00.660772 env[1448]: 2024-02-09 10:00:00.608 [INFO][5046] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" iface="eth0" netns="/var/run/netns/cni-1df0c73b-79b3-a4b9-5459-c2cf72f52551" Feb 9 10:00:00.660772 env[1448]: 2024-02-09 10:00:00.608 [INFO][5046] k8s.go 585: Releasing IP address(es) ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 10:00:00.660772 env[1448]: 2024-02-09 10:00:00.608 [INFO][5046] utils.go 188: Calico CNI releasing IP address ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 10:00:00.660772 env[1448]: 2024-02-09 10:00:00.640 [INFO][5052] ipam_plugin.go 415: Releasing address using handleID ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" HandleID="k8s-pod-network.f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Workload="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:00.660772 env[1448]: 2024-02-09 10:00:00.640 [INFO][5052] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 10:00:00.660772 env[1448]: 2024-02-09 10:00:00.640 [INFO][5052] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 10:00:00.660772 env[1448]: 2024-02-09 10:00:00.651 [WARNING][5052] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" HandleID="k8s-pod-network.f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Workload="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:00.660772 env[1448]: 2024-02-09 10:00:00.651 [INFO][5052] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" HandleID="k8s-pod-network.f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Workload="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:00.660772 env[1448]: 2024-02-09 10:00:00.652 [INFO][5052] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 10:00:00.660772 env[1448]: 2024-02-09 10:00:00.658 [INFO][5046] k8s.go 591: Teardown processing complete. ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 10:00:00.663700 systemd[1]: run-netns-cni\x2d1df0c73b\x2d79b3\x2da4b9\x2d5459\x2dc2cf72f52551.mount: Deactivated successfully. 
Feb 9 10:00:00.664273 env[1448]: time="2024-02-09T10:00:00.664234959Z" level=info msg="TearDown network for sandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\" successfully" Feb 9 10:00:00.664415 env[1448]: time="2024-02-09T10:00:00.664397651Z" level=info msg="StopPodSandbox for \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\" returns successfully" Feb 9 10:00:00.665351 env[1448]: time="2024-02-09T10:00:00.665284796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2mw5,Uid:99a671b0-f7e8-4988-baf5-8e0d96bfea44,Namespace:calico-system,Attempt:1,}" Feb 9 10:00:00.841374 systemd-networkd[1590]: cali12fd53d41fe: Link UP Feb 9 10:00:00.856351 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 10:00:00.865906 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali12fd53d41fe: link becomes ready Feb 9 10:00:00.866690 systemd-networkd[1590]: cali12fd53d41fe: Gained carrier Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.747 [INFO][5058] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0 csi-node-driver- calico-system 99a671b0-f7e8-4988-baf5-8e0d96bfea44 929 0 2024-02-09 09:58:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3510.3.2-a-d10cdd880c csi-node-driver-s2mw5 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali12fd53d41fe [] []}} ContainerID="10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" Namespace="calico-system" Pod="csi-node-driver-s2mw5" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-" Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.754 [INFO][5058] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" Namespace="calico-system" Pod="csi-node-driver-s2mw5" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.797 [INFO][5069] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" HandleID="k8s-pod-network.10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" Workload="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.812 [INFO][5069] ipam_plugin.go 268: Auto assigning IP ContainerID="10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" HandleID="k8s-pod-network.10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" Workload="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002036f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.2-a-d10cdd880c", "pod":"csi-node-driver-s2mw5", "timestamp":"2024-02-09 10:00:00.797767359 +0000 UTC"}, Hostname:"ci-3510.3.2-a-d10cdd880c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.812 [INFO][5069] 
ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.812 [INFO][5069] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.812 [INFO][5069] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-d10cdd880c' Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.814 [INFO][5069] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.817 [INFO][5069] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.820 [INFO][5069] ipam.go 489: Trying affinity for 192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.822 [INFO][5069] ipam.go 155: Attempting to load block cidr=192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.824 [INFO][5069] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.824 [INFO][5069] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.825 [INFO][5069] ipam.go 1682: Creating new handle: k8s-pod-network.10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549 Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.828 [INFO][5069] ipam.go 1203: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.835 [INFO][5069] ipam.go 1216: Successfully claimed IPs: [192.168.73.68/26] block=192.168.73.64/26 handle="k8s-pod-network.10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.835 [INFO][5069] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.68/26] handle="k8s-pod-network.10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.835 [INFO][5069] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 10:00:00.883132 env[1448]: 2024-02-09 10:00:00.835 [INFO][5069] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.73.68/26] IPv6=[] ContainerID="10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" HandleID="k8s-pod-network.10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" Workload="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:00.883729 env[1448]: 2024-02-09 10:00:00.839 [INFO][5058] k8s.go 385: Populated endpoint ContainerID="10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" Namespace="calico-system" Pod="csi-node-driver-s2mw5" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99a671b0-f7e8-4988-baf5-8e0d96bfea44", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"", Pod:"csi-node-driver-s2mw5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.73.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali12fd53d41fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 10:00:00.883729 env[1448]: 2024-02-09 10:00:00.839 [INFO][5058] k8s.go 386: Calico CNI using IPs: [192.168.73.68/32] ContainerID="10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" Namespace="calico-system" Pod="csi-node-driver-s2mw5" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:00.883729 env[1448]: 2024-02-09 10:00:00.839 [INFO][5058] dataplane_linux.go 68: Setting the host side veth name to cali12fd53d41fe ContainerID="10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" Namespace="calico-system" Pod="csi-node-driver-s2mw5" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:00.883729 env[1448]: 2024-02-09 10:00:00.867 [INFO][5058] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" Namespace="calico-system" Pod="csi-node-driver-s2mw5" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:00.883729 env[1448]: 2024-02-09 10:00:00.867 [INFO][5058] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" Namespace="calico-system" Pod="csi-node-driver-s2mw5" 
WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99a671b0-f7e8-4988-baf5-8e0d96bfea44", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549", Pod:"csi-node-driver-s2mw5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.73.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali12fd53d41fe", MAC:"fa:11:ef:86:e9:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 10:00:00.883729 env[1448]: 2024-02-09 10:00:00.880 [INFO][5058] k8s.go 491: Wrote updated endpoint to datastore ContainerID="10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549" Namespace="calico-system" Pod="csi-node-driver-s2mw5" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:00.950027 env[1448]: time="2024-02-09T10:00:00.948899331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:00:00.950027 env[1448]: time="2024-02-09T10:00:00.949012939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:00:00.950027 env[1448]: time="2024-02-09T10:00:00.949041101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:00:00.951139 env[1448]: time="2024-02-09T10:00:00.950967883Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549 pid=5107 runtime=io.containerd.runc.v2 Feb 9 10:00:00.954000 audit[5106]: NETFILTER_CFG table=filter:138 family=2 entries=38 op=nft_register_chain pid=5106 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 10:00:00.954000 audit[5106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19492 a0=3 a1=ffffc0a0eda0 a2=0 a3=ffffb7fe1fa8 items=0 ppid=4307 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:01.010538 kernel: audit: type=1325 audit(1707472800.954:327): table=filter:138 family=2 entries=38 op=nft_register_chain pid=5106 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 10:00:01.010681 kernel: audit: type=1300 audit(1707472800.954:327): arch=c00000b7 syscall=211 success=yes exit=19492 a0=3 a1=ffffc0a0eda0 a2=0 a3=ffffb7fe1fa8 items=0 ppid=4307 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:00.954000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 10:00:01.031285 kernel: audit: type=1327 audit(1707472800.954:327): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 10:00:01.070397 kubelet[2631]: I0209 10:00:01.070353 2631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66ffdb5668-zhk7s" podStartSLOduration=-9.223371963784515e+09 pod.CreationTimestamp="2024-02-09 09:58:48 +0000 UTC" firstStartedPulling="2024-02-09 09:59:57.930355819 +0000 UTC m=+102.794540145" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:00:00.909352949 +0000 UTC m=+105.773537275" watchObservedRunningTime="2024-02-09 10:00:01.070260255 +0000 UTC m=+105.934444581" Feb 9 10:00:01.097316 env[1448]: time="2024-02-09T10:00:01.097242051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2mw5,Uid:99a671b0-f7e8-4988-baf5-8e0d96bfea44,Namespace:calico-system,Attempt:1,} returns sandbox id \"10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549\"" Feb 9 10:00:01.098819 env[1448]: time="2024-02-09T10:00:01.098788763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 10:00:01.574495 systemd[1]: run-containerd-runc-k8s.io-f5160b77947cadfadae2b25c5878a61febea174b54ea0da60bd689352dee80a8-runc.QyWZv4.mount: Deactivated successfully. Feb 9 10:00:02.553504 systemd-networkd[1590]: cali12fd53d41fe: Gained IPv6LL Feb 9 10:00:06.030218 systemd[1]: run-containerd-runc-k8s.io-91d63ce072a914b20a5508b3e18aa2e28df7768cd29968d1054fb74323b1398c-runc.7ksZcX.mount: Deactivated successfully. 
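The audit PROCTITLE record at 10:00:00.954 above carries the triggering command line as a single hex string with NUL bytes separating the arguments. A small standard-library-only sketch that decodes that exact value:

```go
// Decodes the hex-encoded proctitle field from the auditd PROCTITLE record at
// 10:00:00.954 above; audit stores the triggering command's argv as one hex
// string with NUL bytes between the arguments.
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	const proctitle = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	var argv []string
	for _, f := range bytes.Split(raw, []byte{0}) {
		argv = append(argv, string(f))
	}
	fmt.Println(strings.Join(argv, " "))
	// Output: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
```

The later PROCTITLE record at 10:00:11.328 decodes the same way to iptables-restore -w 5 -W 100000 --noflush --counters.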
Feb 9 10:00:08.483638 env[1448]: time="2024-02-09T10:00:08.483582398Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:08.495535 env[1448]: time="2024-02-09T10:00:08.495484630Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:08.501295 env[1448]: time="2024-02-09T10:00:08.501258334Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:08.507342 env[1448]: time="2024-02-09T10:00:08.507285335Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:08.508579 env[1448]: time="2024-02-09T10:00:08.508544739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8\"" Feb 9 10:00:08.512151 env[1448]: time="2024-02-09T10:00:08.512121217Z" level=info msg="CreateContainer within sandbox \"10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 10:00:08.546807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3402282792.mount: Deactivated successfully. Feb 9 10:00:08.566905 env[1448]: time="2024-02-09T10:00:08.566831696Z" level=info msg="CreateContainer within sandbox \"10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a3a42ec27803d0fbe78ff5e9184c90c6ee4a7d5537afdf9995b2dfedec8216ab\"" Feb 9 10:00:08.569887 env[1448]: time="2024-02-09T10:00:08.569788453Z" level=info msg="StartContainer for \"a3a42ec27803d0fbe78ff5e9184c90c6ee4a7d5537afdf9995b2dfedec8216ab\"" Feb 9 10:00:08.641310 env[1448]: time="2024-02-09T10:00:08.641229005Z" level=info msg="StartContainer for \"a3a42ec27803d0fbe78ff5e9184c90c6ee4a7d5537afdf9995b2dfedec8216ab\" returns successfully" Feb 9 10:00:08.643849 env[1448]: time="2024-02-09T10:00:08.643794856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 10:00:11.243716 kubelet[2631]: I0209 10:00:11.243676 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 10:00:11.258576 kubelet[2631]: I0209 10:00:11.258542 2631 topology_manager.go:210] "Topology Admit Handler" Feb 9 10:00:11.299091 kubelet[2631]: I0209 10:00:11.299059 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/19203552-ba5a-46c5-a2d0-9a2f56ce43b3-calico-apiserver-certs\") pod \"calico-apiserver-7f6d76d589-52s9m\" (UID: \"19203552-ba5a-46c5-a2d0-9a2f56ce43b3\") " pod="calico-apiserver/calico-apiserver-7f6d76d589-52s9m" Feb 9 10:00:11.299407 kubelet[2631]: I0209 10:00:11.299368 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk98l\" (UniqueName: \"kubernetes.io/projected/19203552-ba5a-46c5-a2d0-9a2f56ce43b3-kube-api-access-wk98l\") pod \"calico-apiserver-7f6d76d589-52s9m\" (UID: 
\"19203552-ba5a-46c5-a2d0-9a2f56ce43b3\") " pod="calico-apiserver/calico-apiserver-7f6d76d589-52s9m" Feb 9 10:00:11.328000 audit[5246]: NETFILTER_CFG table=filter:139 family=2 entries=7 op=nft_register_rule pid=5246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:11.328000 audit[5246]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffd91ca9f0 a2=0 a3=ffff92b8f6c0 items=0 ppid=2831 pid=5246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:11.376228 kernel: audit: type=1325 audit(1707472811.328:328): table=filter:139 family=2 entries=7 op=nft_register_rule pid=5246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:11.376343 kernel: audit: type=1300 audit(1707472811.328:328): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffd91ca9f0 a2=0 a3=ffff92b8f6c0 items=0 ppid=2831 pid=5246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:11.328000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:00:11.392152 kernel: audit: type=1327 audit(1707472811.328:328): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:00:11.330000 audit[5246]: NETFILTER_CFG table=nat:140 family=2 entries=78 op=nft_register_rule pid=5246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:11.399643 kubelet[2631]: I0209 10:00:11.399609 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5760aa1d-ce11-4869-ae2b-5efc8952120d-calico-apiserver-certs\") pod \"calico-apiserver-7f6d76d589-f54rv\" (UID: \"5760aa1d-ce11-4869-ae2b-5efc8952120d\") " pod="calico-apiserver/calico-apiserver-7f6d76d589-f54rv" Feb 9 10:00:11.399804 kubelet[2631]: I0209 10:00:11.399792 2631 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljmvw\" (UniqueName: \"kubernetes.io/projected/5760aa1d-ce11-4869-ae2b-5efc8952120d-kube-api-access-ljmvw\") pod \"calico-apiserver-7f6d76d589-f54rv\" (UID: \"5760aa1d-ce11-4869-ae2b-5efc8952120d\") " pod="calico-apiserver/calico-apiserver-7f6d76d589-f54rv" Feb 9 10:00:11.400007 kubelet[2631]: E0209 10:00:11.399991 2631 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 10:00:11.400167 kubelet[2631]: E0209 10:00:11.400154 2631 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19203552-ba5a-46c5-a2d0-9a2f56ce43b3-calico-apiserver-certs podName:19203552-ba5a-46c5-a2d0-9a2f56ce43b3 nodeName:}" failed. No retries permitted until 2024-02-09 10:00:11.900120368 +0000 UTC m=+116.764304694 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/19203552-ba5a-46c5-a2d0-9a2f56ce43b3-calico-apiserver-certs") pod "calico-apiserver-7f6d76d589-52s9m" (UID: "19203552-ba5a-46c5-a2d0-9a2f56ce43b3") : secret "calico-apiserver-certs" not found Feb 9 10:00:11.409007 kernel: audit: type=1325 audit(1707472811.330:329): table=nat:140 family=2 entries=78 op=nft_register_rule pid=5246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:11.330000 audit[5246]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd91ca9f0 a2=0 a3=ffff92b8f6c0 items=0 ppid=2831 pid=5246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:11.330000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:00:11.462491 kernel: audit: type=1300 audit(1707472811.330:329): arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd91ca9f0 a2=0 a3=ffff92b8f6c0 items=0 ppid=2831 pid=5246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:11.462625 kernel: audit: type=1327 audit(1707472811.330:329): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:00:11.466000 audit[5273]: NETFILTER_CFG table=filter:141 family=2 entries=8 op=nft_register_rule pid=5273 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:11.466000 audit[5273]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffcefa09d0 a2=0 a3=ffffa161e6c0 items=0 ppid=2831 pid=5273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:11.514446 kernel: audit: type=1325 audit(1707472811.466:330): table=filter:141 family=2 entries=8 op=nft_register_rule pid=5273 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:11.514582 kernel: audit: type=1300 audit(1707472811.466:330): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffcefa09d0 a2=0 a3=ffffa161e6c0 items=0 ppid=2831 pid=5273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:11.466000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:00:11.515441 kubelet[2631]: E0209 10:00:11.515415 2631 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 10:00:11.515627 kubelet[2631]: E0209 10:00:11.515613 2631 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5760aa1d-ce11-4869-ae2b-5efc8952120d-calico-apiserver-certs podName:5760aa1d-ce11-4869-ae2b-5efc8952120d nodeName:}" failed. No retries permitted until 2024-02-09 10:00:12.015595857 +0000 UTC m=+116.879780183 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/5760aa1d-ce11-4869-ae2b-5efc8952120d-calico-apiserver-certs") pod "calico-apiserver-7f6d76d589-f54rv" (UID: "5760aa1d-ce11-4869-ae2b-5efc8952120d") : secret "calico-apiserver-certs" not found Feb 9 10:00:11.529810 kernel: audit: type=1327 audit(1707472811.466:330): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:00:11.475000 audit[5273]: NETFILTER_CFG table=nat:142 family=2 entries=78 op=nft_register_rule pid=5273 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:11.548273 kernel: audit: type=1325 audit(1707472811.475:331): table=nat:142 family=2 entries=78 op=nft_register_rule pid=5273 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:11.475000 audit[5273]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffcefa09d0 a2=0 a3=ffffa161e6c0 items=0 ppid=2831 pid=5273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:11.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:00:12.147979 env[1448]: time="2024-02-09T10:00:12.147934636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6d76d589-52s9m,Uid:19203552-ba5a-46c5-a2d0-9a2f56ce43b3,Namespace:calico-apiserver,Attempt:0,}" Feb 9 10:00:12.162574 env[1448]: time="2024-02-09T10:00:12.162222462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6d76d589-f54rv,Uid:5760aa1d-ce11-4869-ae2b-5efc8952120d,Namespace:calico-apiserver,Attempt:0,}" Feb 9 10:00:12.357784 systemd-networkd[1590]: cali79afcd280dd: Link UP Feb 9 10:00:12.368433 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 10:00:12.368574 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali79afcd280dd: link becomes ready Feb 9 10:00:12.369367 systemd-networkd[1590]: cali79afcd280dd: Gained carrier Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.248 [INFO][5277] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--52s9m-eth0 calico-apiserver-7f6d76d589- calico-apiserver 19203552-ba5a-46c5-a2d0-9a2f56ce43b3 1008 0 2024-02-09 10:00:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f6d76d589 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.2-a-d10cdd880c calico-apiserver-7f6d76d589-52s9m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali79afcd280dd [] []}} ContainerID="7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-52s9m" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--52s9m-" Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.249 [INFO][5277] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-52s9m" 
WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--52s9m-eth0" Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.310 [INFO][5302] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" HandleID="k8s-pod-network.7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--52s9m-eth0" Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.324 [INFO][5302] ipam_plugin.go 268: Auto assigning IP ContainerID="7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" HandleID="k8s-pod-network.7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--52s9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316bf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.2-a-d10cdd880c", "pod":"calico-apiserver-7f6d76d589-52s9m", "timestamp":"2024-02-09 10:00:12.30999363 +0000 UTC"}, Hostname:"ci-3510.3.2-a-d10cdd880c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.324 [INFO][5302] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.324 [INFO][5302] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.324 [INFO][5302] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-d10cdd880c' Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.325 [INFO][5302] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.332 [INFO][5302] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.335 [INFO][5302] ipam.go 489: Trying affinity for 192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.336 [INFO][5302] ipam.go 155: Attempting to load block cidr=192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.339 [INFO][5302] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.339 [INFO][5302] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.340 [INFO][5302] ipam.go 1682: Creating new handle: k8s-pod-network.7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4 Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.343 [INFO][5302] ipam.go 1203: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.347 [INFO][5302] ipam.go 1216: Successfully claimed IPs: [192.168.73.69/26] block=192.168.73.64/26 
handle="k8s-pod-network.7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.348 [INFO][5302] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.69/26] handle="k8s-pod-network.7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.348 [INFO][5302] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 10:00:12.402440 env[1448]: 2024-02-09 10:00:12.348 [INFO][5302] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.73.69/26] IPv6=[] ContainerID="7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" HandleID="k8s-pod-network.7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--52s9m-eth0" Feb 9 10:00:12.403070 env[1448]: 2024-02-09 10:00:12.351 [INFO][5277] k8s.go 385: Populated endpoint ContainerID="7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-52s9m" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--52s9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--52s9m-eth0", GenerateName:"calico-apiserver-7f6d76d589-", Namespace:"calico-apiserver", SelfLink:"", UID:"19203552-ba5a-46c5-a2d0-9a2f56ce43b3", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 10, 0, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6d76d589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"", Pod:"calico-apiserver-7f6d76d589-52s9m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79afcd280dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 10:00:12.403070 env[1448]: 2024-02-09 10:00:12.351 [INFO][5277] k8s.go 386: Calico CNI using IPs: [192.168.73.69/32] ContainerID="7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-52s9m" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--52s9m-eth0" Feb 9 10:00:12.403070 env[1448]: 2024-02-09 10:00:12.351 [INFO][5277] dataplane_linux.go 68: Setting the host side veth name to cali79afcd280dd ContainerID="7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-52s9m" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--52s9m-eth0" Feb 9 10:00:12.403070 env[1448]: 2024-02-09 10:00:12.371 
[INFO][5277] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-52s9m" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--52s9m-eth0" Feb 9 10:00:12.403070 env[1448]: 2024-02-09 10:00:12.375 [INFO][5277] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-52s9m" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--52s9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--52s9m-eth0", GenerateName:"calico-apiserver-7f6d76d589-", Namespace:"calico-apiserver", SelfLink:"", UID:"19203552-ba5a-46c5-a2d0-9a2f56ce43b3", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 10, 0, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6d76d589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4", Pod:"calico-apiserver-7f6d76d589-52s9m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79afcd280dd", MAC:"1a:db:ca:97:09:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 10:00:12.403070 env[1448]: 2024-02-09 10:00:12.390 [INFO][5277] k8s.go 491: Wrote updated endpoint to datastore ContainerID="7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-52s9m" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--52s9m-eth0" Feb 9 10:00:12.427762 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali275cbb27e0e: link becomes ready Feb 9 10:00:12.428503 systemd-networkd[1590]: cali275cbb27e0e: Link UP Feb 9 10:00:12.429151 systemd-networkd[1590]: cali275cbb27e0e: Gained carrier Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.283 [INFO][5288] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--f54rv-eth0 calico-apiserver-7f6d76d589- calico-apiserver 5760aa1d-ce11-4869-ae2b-5efc8952120d 1011 0 2024-02-09 10:00:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f6d76d589 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.2-a-d10cdd880c calico-apiserver-7f6d76d589-f54rv eth0 
calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali275cbb27e0e [] []}} ContainerID="eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-f54rv" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--f54rv-" Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.283 [INFO][5288] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-f54rv" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--f54rv-eth0" Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.337 [INFO][5308] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" HandleID="k8s-pod-network.eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--f54rv-eth0" Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.361 [INFO][5308] ipam_plugin.go 268: Auto assigning IP ContainerID="eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" HandleID="k8s-pod-network.eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--f54rv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c0d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.2-a-d10cdd880c", "pod":"calico-apiserver-7f6d76d589-f54rv", "timestamp":"2024-02-09 10:00:12.337467612 +0000 UTC"}, Hostname:"ci-3510.3.2-a-d10cdd880c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.361 [INFO][5308] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.361 [INFO][5308] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.361 [INFO][5308] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-d10cdd880c' Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.363 [INFO][5308] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.380 [INFO][5308] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.393 [INFO][5308] ipam.go 489: Trying affinity for 192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.394 [INFO][5308] ipam.go 155: Attempting to load block cidr=192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.397 [INFO][5308] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.73.64/26 host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.397 [INFO][5308] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.73.64/26 handle="k8s-pod-network.eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.400 [INFO][5308] ipam.go 1682: Creating new handle: k8s-pod-network.eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.405 [INFO][5308] ipam.go 1203: Writing block in order to claim IPs block=192.168.73.64/26 handle="k8s-pod-network.eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.413 [INFO][5308] ipam.go 1216: Successfully claimed IPs: [192.168.73.70/26] block=192.168.73.64/26 handle="k8s-pod-network.eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.413 [INFO][5308] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.73.70/26] handle="k8s-pod-network.eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" host="ci-3510.3.2-a-d10cdd880c" Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.413 [INFO][5308] ipam_plugin.go 377: Released host-wide IPAM lock. 
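Before the two calico-apiserver sandboxes above could be set up, the 10:00:11 entries show MountVolume.SetUp failing because the calico-apiserver-certs secret did not exist yet, with the next attempt scheduled 500ms later (durationBeforeRetry 500ms). A rough sketch of that retry pattern follows; the probe function and the exact backoff (doubling with a 2-minute cap) are assumptions for illustration, not kubelet's actual internals.

```go
// Rough sketch of the retry pattern behind the "No retries permitted until ...
// (durationBeforeRetry 500ms)" messages at 10:00:11: keep re-attempting the
// mount with a growing delay until the referenced secret exists.
package main

import (
	"errors"
	"fmt"
	"time"
)

// secretExists pretends the calico-apiserver-certs secret shows up on try 3.
func secretExists(attempt int) error {
	if attempt < 3 {
		return errors.New(`secret "calico-apiserver-certs" not found`)
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := secretExists(attempt)
		if err == nil {
			fmt.Printf("attempt %d: volume mounted\n", attempt)
			return
		}
		fmt.Printf("attempt %d failed: %v; next retry in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > 2*time.Minute {
			delay = 2 * time.Minute // assumed cap, for illustration only
		}
	}
}
```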
Feb 9 10:00:12.450286 env[1448]: 2024-02-09 10:00:12.413 [INFO][5308] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.73.70/26] IPv6=[] ContainerID="eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" HandleID="k8s-pod-network.eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--f54rv-eth0" Feb 9 10:00:12.450873 env[1448]: 2024-02-09 10:00:12.415 [INFO][5288] k8s.go 385: Populated endpoint ContainerID="eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-f54rv" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--f54rv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--f54rv-eth0", GenerateName:"calico-apiserver-7f6d76d589-", Namespace:"calico-apiserver", SelfLink:"", UID:"5760aa1d-ce11-4869-ae2b-5efc8952120d", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 10, 0, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6d76d589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"", Pod:"calico-apiserver-7f6d76d589-f54rv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali275cbb27e0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 10:00:12.450873 env[1448]: 2024-02-09 10:00:12.415 [INFO][5288] k8s.go 386: Calico CNI using IPs: [192.168.73.70/32] ContainerID="eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-f54rv" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--f54rv-eth0" Feb 9 10:00:12.450873 env[1448]: 2024-02-09 10:00:12.415 [INFO][5288] dataplane_linux.go 68: Setting the host side veth name to cali275cbb27e0e ContainerID="eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-f54rv" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--f54rv-eth0" Feb 9 10:00:12.450873 env[1448]: 2024-02-09 10:00:12.426 [INFO][5288] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-f54rv" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--f54rv-eth0" Feb 9 10:00:12.450873 env[1448]: 2024-02-09 10:00:12.431 [INFO][5288] k8s.go 413: Added Mac, interface name, and active container ID to endpoint 
ContainerID="eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-f54rv" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--f54rv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--f54rv-eth0", GenerateName:"calico-apiserver-7f6d76d589-", Namespace:"calico-apiserver", SelfLink:"", UID:"5760aa1d-ce11-4869-ae2b-5efc8952120d", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 10, 0, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6d76d589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b", Pod:"calico-apiserver-7f6d76d589-f54rv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali275cbb27e0e", MAC:"3e:0d:42:70:d3:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 10:00:12.450873 env[1448]: 2024-02-09 10:00:12.442 [INFO][5288] k8s.go 491: Wrote updated endpoint to datastore ContainerID="eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b" Namespace="calico-apiserver" Pod="calico-apiserver-7f6d76d589-f54rv" WorkloadEndpoint="ci--3510.3.2--a--d10cdd880c-k8s-calico--apiserver--7f6d76d589--f54rv-eth0" Feb 9 10:00:12.465000 audit[5351]: NETFILTER_CFG table=filter:143 family=2 entries=51 op=nft_register_chain pid=5351 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 10:00:12.465000 audit[5351]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26936 a0=3 a1=ffffc377f990 a2=0 a3=ffffaa8c1fa8 items=0 ppid=4307 pid=5351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:12.465000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 10:00:12.468932 env[1448]: time="2024-02-09T10:00:12.468843581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:00:12.469046 env[1448]: time="2024-02-09T10:00:12.468938227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:00:12.469046 env[1448]: time="2024-02-09T10:00:12.468965949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:00:12.469185 env[1448]: time="2024-02-09T10:00:12.469126399Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4 pid=5352 runtime=io.containerd.runc.v2 Feb 9 10:00:12.485000 audit[5381]: NETFILTER_CFG table=filter:144 family=2 entries=42 op=nft_register_chain pid=5381 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 10:00:12.486988 env[1448]: time="2024-02-09T10:00:12.486921087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:00:12.486988 env[1448]: time="2024-02-09T10:00:12.486963890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:00:12.486988 env[1448]: time="2024-02-09T10:00:12.486975051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:00:12.487355 env[1448]: time="2024-02-09T10:00:12.487276830Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b pid=5372 runtime=io.containerd.runc.v2 Feb 9 10:00:12.485000 audit[5381]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22104 a0=3 a1=fffff3d33720 a2=0 a3=ffff8710afa8 items=0 ppid=4307 pid=5381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:12.485000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 10:00:12.569809 env[1448]: time="2024-02-09T10:00:12.569752419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6d76d589-52s9m,Uid:19203552-ba5a-46c5-a2d0-9a2f56ce43b3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4\"" Feb 9 10:00:12.576592 env[1448]: time="2024-02-09T10:00:12.576548569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6d76d589-f54rv,Uid:5760aa1d-ce11-4869-ae2b-5efc8952120d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b\"" Feb 9 10:00:13.561471 systemd-networkd[1590]: cali79afcd280dd: Gained IPv6LL Feb 9 10:00:14.073418 systemd-networkd[1590]: cali275cbb27e0e: Gained IPv6LL Feb 9 10:00:17.339901 env[1448]: time="2024-02-09T10:00:17.339859132Z" level=info msg="StopPodSandbox for \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\"" Feb 9 10:00:17.404874 env[1448]: 2024-02-09 10:00:17.374 [WARNING][5450] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa", Pod:"coredns-787d4945fb-z55w7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali751b1c81fba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 10:00:17.404874 env[1448]: 2024-02-09 10:00:17.375 [INFO][5450] k8s.go 578: Cleaning up netns ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 10:00:17.404874 env[1448]: 2024-02-09 10:00:17.375 [INFO][5450] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" iface="eth0" netns="" Feb 9 10:00:17.404874 env[1448]: 2024-02-09 10:00:17.375 [INFO][5450] k8s.go 585: Releasing IP address(es) ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 10:00:17.404874 env[1448]: 2024-02-09 10:00:17.375 [INFO][5450] utils.go 188: Calico CNI releasing IP address ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 10:00:17.404874 env[1448]: 2024-02-09 10:00:17.392 [INFO][5456] ipam_plugin.go 415: Releasing address using handleID ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" HandleID="k8s-pod-network.c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 10:00:17.404874 env[1448]: 2024-02-09 10:00:17.392 [INFO][5456] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 10:00:17.404874 env[1448]: 2024-02-09 10:00:17.392 [INFO][5456] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 10:00:17.404874 env[1448]: 2024-02-09 10:00:17.401 [WARNING][5456] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" HandleID="k8s-pod-network.c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 10:00:17.404874 env[1448]: 2024-02-09 10:00:17.401 [INFO][5456] ipam_plugin.go 443: Releasing address using workloadID ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" HandleID="k8s-pod-network.c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 10:00:17.404874 env[1448]: 2024-02-09 10:00:17.402 [INFO][5456] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 10:00:17.404874 env[1448]: 2024-02-09 10:00:17.403 [INFO][5450] k8s.go 591: Teardown processing complete. ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 10:00:17.405350 env[1448]: time="2024-02-09T10:00:17.404907060Z" level=info msg="TearDown network for sandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\" successfully" Feb 9 10:00:17.405350 env[1448]: time="2024-02-09T10:00:17.404939461Z" level=info msg="StopPodSandbox for \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\" returns successfully" Feb 9 10:00:17.405538 env[1448]: time="2024-02-09T10:00:17.405506015Z" level=info msg="RemovePodSandbox for \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\"" Feb 9 10:00:17.405591 env[1448]: time="2024-02-09T10:00:17.405552818Z" level=info msg="Forcibly stopping sandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\"" Feb 9 10:00:17.467457 env[1448]: 2024-02-09 10:00:17.438 [WARNING][5475] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"ba7e5bc4-13aa-4925-8a28-9dfe806f0e3b", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"de75578717a30796d2c89b5383300e1263c67d3cf095aa4b007c71ab770c60aa", Pod:"coredns-787d4945fb-z55w7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali751b1c81fba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 10:00:17.467457 env[1448]: 2024-02-09 10:00:17.438 [INFO][5475] k8s.go 578: Cleaning up netns ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 10:00:17.467457 env[1448]: 2024-02-09 10:00:17.438 [INFO][5475] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" iface="eth0" netns="" Feb 9 10:00:17.467457 env[1448]: 2024-02-09 10:00:17.438 [INFO][5475] k8s.go 585: Releasing IP address(es) ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 10:00:17.467457 env[1448]: 2024-02-09 10:00:17.438 [INFO][5475] utils.go 188: Calico CNI releasing IP address ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 10:00:17.467457 env[1448]: 2024-02-09 10:00:17.454 [INFO][5481] ipam_plugin.go 415: Releasing address using handleID ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" HandleID="k8s-pod-network.c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 10:00:17.467457 env[1448]: 2024-02-09 10:00:17.454 [INFO][5481] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 10:00:17.467457 env[1448]: 2024-02-09 10:00:17.454 [INFO][5481] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 10:00:17.467457 env[1448]: 2024-02-09 10:00:17.463 [WARNING][5481] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" HandleID="k8s-pod-network.c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 10:00:17.467457 env[1448]: 2024-02-09 10:00:17.463 [INFO][5481] ipam_plugin.go 443: Releasing address using workloadID ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" HandleID="k8s-pod-network.c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--z55w7-eth0" Feb 9 10:00:17.467457 env[1448]: 2024-02-09 10:00:17.465 [INFO][5481] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 10:00:17.467457 env[1448]: 2024-02-09 10:00:17.466 [INFO][5475] k8s.go 591: Teardown processing complete. ContainerID="c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f" Feb 9 10:00:17.467976 env[1448]: time="2024-02-09T10:00:17.467930706Z" level=info msg="TearDown network for sandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\" successfully" Feb 9 10:00:17.478081 env[1448]: time="2024-02-09T10:00:17.478035510Z" level=info msg="RemovePodSandbox \"c1fe3348e28dbac7ff2cee1d7c38896695ba2af371345c60136ab6c6b466806f\" returns successfully" Feb 9 10:00:17.478690 env[1448]: time="2024-02-09T10:00:17.478658747Z" level=info msg="StopPodSandbox for \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\"" Feb 9 10:00:17.546043 env[1448]: 2024-02-09 10:00:17.513 [WARNING][5500] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0", GenerateName:"calico-kube-controllers-66ffdb5668-", Namespace:"calico-system", SelfLink:"", UID:"089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66ffdb5668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60", Pod:"calico-kube-controllers-66ffdb5668-zhk7s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d0d1b78371", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 10:00:17.546043 env[1448]: 2024-02-09 10:00:17.513 [INFO][5500] k8s.go 578: Cleaning up netns ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 10:00:17.546043 env[1448]: 
2024-02-09 10:00:17.513 [INFO][5500] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" iface="eth0" netns="" Feb 9 10:00:17.546043 env[1448]: 2024-02-09 10:00:17.513 [INFO][5500] k8s.go 585: Releasing IP address(es) ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 10:00:17.546043 env[1448]: 2024-02-09 10:00:17.513 [INFO][5500] utils.go 188: Calico CNI releasing IP address ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 10:00:17.546043 env[1448]: 2024-02-09 10:00:17.530 [INFO][5508] ipam_plugin.go 415: Releasing address using handleID ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" HandleID="k8s-pod-network.f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 10:00:17.546043 env[1448]: 2024-02-09 10:00:17.530 [INFO][5508] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 10:00:17.546043 env[1448]: 2024-02-09 10:00:17.530 [INFO][5508] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 10:00:17.546043 env[1448]: 2024-02-09 10:00:17.541 [WARNING][5508] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" HandleID="k8s-pod-network.f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 10:00:17.546043 env[1448]: 2024-02-09 10:00:17.541 [INFO][5508] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" HandleID="k8s-pod-network.f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 10:00:17.546043 env[1448]: 2024-02-09 10:00:17.543 [INFO][5508] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 10:00:17.546043 env[1448]: 2024-02-09 10:00:17.544 [INFO][5500] k8s.go 591: Teardown processing complete. ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 10:00:17.546543 env[1448]: time="2024-02-09T10:00:17.546069295Z" level=info msg="TearDown network for sandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\" successfully" Feb 9 10:00:17.546543 env[1448]: time="2024-02-09T10:00:17.546097337Z" level=info msg="StopPodSandbox for \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\" returns successfully" Feb 9 10:00:17.546739 env[1448]: time="2024-02-09T10:00:17.546710134Z" level=info msg="RemovePodSandbox for \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\"" Feb 9 10:00:17.546863 env[1448]: time="2024-02-09T10:00:17.546827941Z" level=info msg="Forcibly stopping sandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\"" Feb 9 10:00:17.613112 env[1448]: 2024-02-09 10:00:17.580 [WARNING][5527] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0", GenerateName:"calico-kube-controllers-66ffdb5668-", Namespace:"calico-system", SelfLink:"", UID:"089a4bc9-3ee0-4ac6-b6dd-cc5465ed5f4b", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66ffdb5668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"8c1816897da65e75c5d266ed093c06abc98a477be01a6342776fb608e8e3df60", Pod:"calico-kube-controllers-66ffdb5668-zhk7s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d0d1b78371", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 10:00:17.613112 env[1448]: 2024-02-09 10:00:17.580 [INFO][5527] k8s.go 578: Cleaning up netns ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 10:00:17.613112 env[1448]: 2024-02-09 10:00:17.580 [INFO][5527] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" iface="eth0" netns="" Feb 9 10:00:17.613112 env[1448]: 2024-02-09 10:00:17.580 [INFO][5527] k8s.go 585: Releasing IP address(es) ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 10:00:17.613112 env[1448]: 2024-02-09 10:00:17.580 [INFO][5527] utils.go 188: Calico CNI releasing IP address ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 10:00:17.613112 env[1448]: 2024-02-09 10:00:17.598 [INFO][5534] ipam_plugin.go 415: Releasing address using handleID ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" HandleID="k8s-pod-network.f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 10:00:17.613112 env[1448]: 2024-02-09 10:00:17.598 [INFO][5534] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 10:00:17.613112 env[1448]: 2024-02-09 10:00:17.598 [INFO][5534] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 10:00:17.613112 env[1448]: 2024-02-09 10:00:17.608 [WARNING][5534] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" HandleID="k8s-pod-network.f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 10:00:17.613112 env[1448]: 2024-02-09 10:00:17.608 [INFO][5534] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" HandleID="k8s-pod-network.f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Workload="ci--3510.3.2--a--d10cdd880c-k8s-calico--kube--controllers--66ffdb5668--zhk7s-eth0" Feb 9 10:00:17.613112 env[1448]: 2024-02-09 10:00:17.609 [INFO][5534] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 10:00:17.613112 env[1448]: 2024-02-09 10:00:17.610 [INFO][5527] k8s.go 591: Teardown processing complete. ContainerID="f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917" Feb 9 10:00:17.613112 env[1448]: time="2024-02-09T10:00:17.612080880Z" level=info msg="TearDown network for sandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\" successfully" Feb 9 10:00:17.620486 env[1448]: time="2024-02-09T10:00:17.620433539Z" level=info msg="RemovePodSandbox \"f952357642f85bdc2d65969e3eb0888ccda29b1685a587266a7028b8e593a917\" returns successfully" Feb 9 10:00:17.620941 env[1448]: time="2024-02-09T10:00:17.620912768Z" level=info msg="StopPodSandbox for \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\"" Feb 9 10:00:17.683994 env[1448]: 2024-02-09 10:00:17.653 [WARNING][5553] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"f6e0c31d-dc6d-4ba7-8de2-7860fda55d58", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf", Pod:"coredns-787d4945fb-cd67q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73f928224c1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 10:00:17.683994 env[1448]: 2024-02-09 10:00:17.654 [INFO][5553] k8s.go 578: Cleaning up netns ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 10:00:17.683994 env[1448]: 2024-02-09 10:00:17.654 [INFO][5553] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" iface="eth0" netns="" Feb 9 10:00:17.683994 env[1448]: 2024-02-09 10:00:17.654 [INFO][5553] k8s.go 585: Releasing IP address(es) ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 10:00:17.683994 env[1448]: 2024-02-09 10:00:17.654 [INFO][5553] utils.go 188: Calico CNI releasing IP address ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 10:00:17.683994 env[1448]: 2024-02-09 10:00:17.671 [INFO][5559] ipam_plugin.go 415: Releasing address using handleID ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" HandleID="k8s-pod-network.6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 10:00:17.683994 env[1448]: 2024-02-09 10:00:17.671 [INFO][5559] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 10:00:17.683994 env[1448]: 2024-02-09 10:00:17.671 [INFO][5559] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 10:00:17.683994 env[1448]: 2024-02-09 10:00:17.680 [WARNING][5559] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" HandleID="k8s-pod-network.6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 10:00:17.683994 env[1448]: 2024-02-09 10:00:17.680 [INFO][5559] ipam_plugin.go 443: Releasing address using workloadID ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" HandleID="k8s-pod-network.6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 10:00:17.683994 env[1448]: 2024-02-09 10:00:17.681 [INFO][5559] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 10:00:17.683994 env[1448]: 2024-02-09 10:00:17.682 [INFO][5553] k8s.go 591: Teardown processing complete. ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 10:00:17.684457 env[1448]: time="2024-02-09T10:00:17.684031900Z" level=info msg="TearDown network for sandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\" successfully" Feb 9 10:00:17.684457 env[1448]: time="2024-02-09T10:00:17.684061582Z" level=info msg="StopPodSandbox for \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\" returns successfully" Feb 9 10:00:17.684930 env[1448]: time="2024-02-09T10:00:17.684897952Z" level=info msg="RemovePodSandbox for \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\"" Feb 9 10:00:17.684995 env[1448]: time="2024-02-09T10:00:17.684948875Z" level=info msg="Forcibly stopping sandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\"" Feb 9 10:00:17.749783 env[1448]: 2024-02-09 10:00:17.716 [WARNING][5577] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"f6e0c31d-dc6d-4ba7-8de2-7860fda55d58", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"1f023ddaa8125371f847333c74154d471b9462ad992af6d474f91caac4788ddf", Pod:"coredns-787d4945fb-cd67q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73f928224c1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 10:00:17.749783 env[1448]: 2024-02-09 10:00:17.717 [INFO][5577] k8s.go 578: Cleaning up netns ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 10:00:17.749783 env[1448]: 2024-02-09 10:00:17.717 [INFO][5577] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" iface="eth0" netns="" Feb 9 10:00:17.749783 env[1448]: 2024-02-09 10:00:17.717 [INFO][5577] k8s.go 585: Releasing IP address(es) ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 10:00:17.749783 env[1448]: 2024-02-09 10:00:17.717 [INFO][5577] utils.go 188: Calico CNI releasing IP address ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 10:00:17.749783 env[1448]: 2024-02-09 10:00:17.736 [INFO][5583] ipam_plugin.go 415: Releasing address using handleID ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" HandleID="k8s-pod-network.6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 10:00:17.749783 env[1448]: 2024-02-09 10:00:17.736 [INFO][5583] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 10:00:17.749783 env[1448]: 2024-02-09 10:00:17.736 [INFO][5583] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 10:00:17.749783 env[1448]: 2024-02-09 10:00:17.746 [WARNING][5583] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" HandleID="k8s-pod-network.6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 10:00:17.749783 env[1448]: 2024-02-09 10:00:17.746 [INFO][5583] ipam_plugin.go 443: Releasing address using workloadID ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" HandleID="k8s-pod-network.6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Workload="ci--3510.3.2--a--d10cdd880c-k8s-coredns--787d4945fb--cd67q-eth0" Feb 9 10:00:17.749783 env[1448]: 2024-02-09 10:00:17.747 [INFO][5583] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 10:00:17.749783 env[1448]: 2024-02-09 10:00:17.748 [INFO][5577] k8s.go 591: Teardown processing complete. ContainerID="6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6" Feb 9 10:00:17.750218 env[1448]: time="2024-02-09T10:00:17.749815591Z" level=info msg="TearDown network for sandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\" successfully" Feb 9 10:00:17.758103 env[1448]: time="2024-02-09T10:00:17.758056844Z" level=info msg="RemovePodSandbox \"6de8df183dc36f2008f8e0c428cedb49e0617045e5090d041fd1907e127148e6\" returns successfully" Feb 9 10:00:17.758598 env[1448]: time="2024-02-09T10:00:17.758574555Z" level=info msg="StopPodSandbox for \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\"" Feb 9 10:00:17.864994 env[1448]: 2024-02-09 10:00:17.799 [WARNING][5601] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99a671b0-f7e8-4988-baf5-8e0d96bfea44", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549", Pod:"csi-node-driver-s2mw5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.73.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali12fd53d41fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 10:00:17.864994 env[1448]: 2024-02-09 10:00:17.799 [INFO][5601] k8s.go 578: Cleaning up netns ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 10:00:17.864994 env[1448]: 2024-02-09 10:00:17.799 [INFO][5601] dataplane_linux.go 526: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" iface="eth0" netns="" Feb 9 10:00:17.864994 env[1448]: 2024-02-09 10:00:17.799 [INFO][5601] k8s.go 585: Releasing IP address(es) ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 10:00:17.864994 env[1448]: 2024-02-09 10:00:17.799 [INFO][5601] utils.go 188: Calico CNI releasing IP address ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 10:00:17.864994 env[1448]: 2024-02-09 10:00:17.838 [INFO][5607] ipam_plugin.go 415: Releasing address using handleID ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" HandleID="k8s-pod-network.f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Workload="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:17.864994 env[1448]: 2024-02-09 10:00:17.839 [INFO][5607] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 10:00:17.864994 env[1448]: 2024-02-09 10:00:17.841 [INFO][5607] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 10:00:17.864994 env[1448]: 2024-02-09 10:00:17.851 [WARNING][5607] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" HandleID="k8s-pod-network.f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Workload="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:17.864994 env[1448]: 2024-02-09 10:00:17.851 [INFO][5607] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" HandleID="k8s-pod-network.f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Workload="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:17.864994 env[1448]: 2024-02-09 10:00:17.861 [INFO][5607] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 10:00:17.864994 env[1448]: 2024-02-09 10:00:17.862 [INFO][5601] k8s.go 591: Teardown processing complete. ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 10:00:17.866253 env[1448]: time="2024-02-09T10:00:17.866193146Z" level=info msg="TearDown network for sandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\" successfully" Feb 9 10:00:17.867427 env[1448]: time="2024-02-09T10:00:17.867399138Z" level=info msg="StopPodSandbox for \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\" returns successfully" Feb 9 10:00:17.867984 env[1448]: time="2024-02-09T10:00:17.867962092Z" level=info msg="RemovePodSandbox for \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\"" Feb 9 10:00:17.868104 env[1448]: time="2024-02-09T10:00:17.868069738Z" level=info msg="Forcibly stopping sandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\"" Feb 9 10:00:17.971379 env[1448]: 2024-02-09 10:00:17.915 [WARNING][5626] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99a671b0-f7e8-4988-baf5-8e0d96bfea44", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 9, 58, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-d10cdd880c", ContainerID:"10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549", Pod:"csi-node-driver-s2mw5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.73.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali12fd53d41fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 10:00:17.971379 env[1448]: 2024-02-09 10:00:17.915 [INFO][5626] k8s.go 578: Cleaning up netns ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 10:00:17.971379 env[1448]: 2024-02-09 10:00:17.915 [INFO][5626] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" iface="eth0" netns="" Feb 9 10:00:17.971379 env[1448]: 2024-02-09 10:00:17.915 [INFO][5626] k8s.go 585: Releasing IP address(es) ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 10:00:17.971379 env[1448]: 2024-02-09 10:00:17.915 [INFO][5626] utils.go 188: Calico CNI releasing IP address ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 10:00:17.971379 env[1448]: 2024-02-09 10:00:17.954 [INFO][5632] ipam_plugin.go 415: Releasing address using handleID ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" HandleID="k8s-pod-network.f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Workload="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:17.971379 env[1448]: 2024-02-09 10:00:17.955 [INFO][5632] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 10:00:17.971379 env[1448]: 2024-02-09 10:00:17.955 [INFO][5632] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 10:00:17.971379 env[1448]: 2024-02-09 10:00:17.966 [WARNING][5632] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" HandleID="k8s-pod-network.f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Workload="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:17.971379 env[1448]: 2024-02-09 10:00:17.966 [INFO][5632] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" HandleID="k8s-pod-network.f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Workload="ci--3510.3.2--a--d10cdd880c-k8s-csi--node--driver--s2mw5-eth0" Feb 9 10:00:17.971379 env[1448]: 2024-02-09 10:00:17.968 [INFO][5632] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 10:00:17.971379 env[1448]: 2024-02-09 10:00:17.970 [INFO][5626] k8s.go 591: Teardown processing complete. ContainerID="f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc" Feb 9 10:00:17.971880 env[1448]: time="2024-02-09T10:00:17.971848620Z" level=info msg="TearDown network for sandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\" successfully" Feb 9 10:00:17.985685 env[1448]: time="2024-02-09T10:00:17.985638484Z" level=info msg="RemovePodSandbox \"f2085bca38ad5e1d2c534ce6eafaaca4bdc2cd8a2819f432643b3908fbc05dbc\" returns successfully" Feb 9 10:00:20.638992 systemd[1]: run-containerd-runc-k8s.io-f5160b77947cadfadae2b25c5878a61febea174b54ea0da60bd689352dee80a8-runc.fb5l6X.mount: Deactivated successfully. Feb 9 10:00:20.655558 env[1448]: time="2024-02-09T10:00:20.655511105Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:20.664147 env[1448]: time="2024-02-09T10:00:20.664087720Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:20.668973 env[1448]: time="2024-02-09T10:00:20.668937240Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:20.686375 env[1448]: time="2024-02-09T10:00:20.685997584Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:20.686487 env[1448]: time="2024-02-09T10:00:20.686411688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43\"" Feb 9 10:00:20.693089 env[1448]: time="2024-02-09T10:00:20.693052392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 10:00:20.695601 env[1448]: time="2024-02-09T10:00:20.695571137Z" level=info msg="CreateContainer within sandbox \"10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 10:00:20.737570 env[1448]: time="2024-02-09T10:00:20.737516678Z" level=info msg="CreateContainer within sandbox \"10603c3b8f20f726ea29b339e47081c742fb713ad651cd90d9c511fa5c1ce549\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"d6878d3ecc4d1380dd4f47c8555813bb1dc6aaf224ce804310cac20baa71f453\"" Feb 9 10:00:20.738171 env[1448]: time="2024-02-09T10:00:20.738145394Z" level=info msg="StartContainer for \"d6878d3ecc4d1380dd4f47c8555813bb1dc6aaf224ce804310cac20baa71f453\"" Feb 9 10:00:20.799950 env[1448]: time="2024-02-09T10:00:20.799903438Z" level=info msg="StartContainer for \"d6878d3ecc4d1380dd4f47c8555813bb1dc6aaf224ce804310cac20baa71f453\" returns successfully" Feb 9 10:00:20.951713 kubelet[2631]: I0209 10:00:20.951587 2631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-s2mw5" podStartSLOduration=-9.223371942903225e+09 pod.CreationTimestamp="2024-02-09 09:58:47 +0000 UTC" firstStartedPulling="2024-02-09 10:00:01.098502023 +0000 UTC m=+105.962686309" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:00:20.950486808 +0000 UTC m=+125.814671134" watchObservedRunningTime="2024-02-09 10:00:20.95155107 +0000 UTC m=+125.815735396" Feb 9 10:00:21.664957 kubelet[2631]: I0209 10:00:21.664688 2631 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 10:00:21.664957 kubelet[2631]: I0209 10:00:21.664727 2631 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 10:00:40.230886 env[1448]: time="2024-02-09T10:00:40.230820503Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:40.238842 env[1448]: time="2024-02-09T10:00:40.238804073Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:40.242095 env[1448]: time="2024-02-09T10:00:40.242061584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:40.246876 env[1448]: time="2024-02-09T10:00:40.246842925Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:40.247362 env[1448]: time="2024-02-09T10:00:40.247321987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9\"" Feb 9 10:00:40.249143 env[1448]: time="2024-02-09T10:00:40.248620727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 10:00:40.251363 env[1448]: time="2024-02-09T10:00:40.251295811Z" level=info msg="CreateContainer within sandbox \"7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 10:00:40.280067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188916692.mount: Deactivated successfully. 
Feb 9 10:00:40.294609 env[1448]: time="2024-02-09T10:00:40.294547252Z" level=info msg="CreateContainer within sandbox \"7885dd82e60c936051a8071a13b31dd5cdfdc687f24ec72ab1d146a6aed2f5a4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6af57b696b2eb8090e936cd3ee70390dff905cf992966c205ae64ad76299a3c7\"" Feb 9 10:00:40.295552 env[1448]: time="2024-02-09T10:00:40.295527457Z" level=info msg="StartContainer for \"6af57b696b2eb8090e936cd3ee70390dff905cf992966c205ae64ad76299a3c7\"" Feb 9 10:00:40.367596 env[1448]: time="2024-02-09T10:00:40.367545869Z" level=info msg="StartContainer for \"6af57b696b2eb8090e936cd3ee70390dff905cf992966c205ae64ad76299a3c7\" returns successfully" Feb 9 10:00:40.986647 kubelet[2631]: I0209 10:00:40.986615 2631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f6d76d589-52s9m" podStartSLOduration=-9.223372006868206e+09 pod.CreationTimestamp="2024-02-09 10:00:11 +0000 UTC" firstStartedPulling="2024-02-09 10:00:12.571011098 +0000 UTC m=+117.435195424" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:00:40.984775027 +0000 UTC m=+145.848959353" watchObservedRunningTime="2024-02-09 10:00:40.98656891 +0000 UTC m=+145.850753236" Feb 9 10:00:41.068000 audit[5812]: NETFILTER_CFG table=filter:145 family=2 entries=8 op=nft_register_rule pid=5812 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:41.075957 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 10:00:41.076030 kernel: audit: type=1325 audit(1707472841.068:334): table=filter:145 family=2 entries=8 op=nft_register_rule pid=5812 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:41.068000 audit[5812]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffcd2454f0 a2=0 a3=ffff99f176c0 items=0 ppid=2831 pid=5812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:41.131837 kernel: audit: type=1300 audit(1707472841.068:334): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffcd2454f0 a2=0 a3=ffff99f176c0 items=0 ppid=2831 pid=5812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:41.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:00:41.155173 kernel: audit: type=1327 audit(1707472841.068:334): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:00:41.093000 audit[5812]: NETFILTER_CFG table=nat:146 family=2 entries=78 op=nft_register_rule pid=5812 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:41.179471 kernel: audit: type=1325 audit(1707472841.093:335): table=nat:146 family=2 entries=78 op=nft_register_rule pid=5812 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:41.093000 audit[5812]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffcd2454f0 a2=0 a3=ffff99f176c0 items=0 ppid=2831 pid=5812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 9 10:00:41.211255 kernel: audit: type=1300 audit(1707472841.093:335): arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffcd2454f0 a2=0 a3=ffff99f176c0 items=0 ppid=2831 pid=5812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:41.093000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:00:41.226564 kernel: audit: type=1327 audit(1707472841.093:335): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:00:41.818731 env[1448]: time="2024-02-09T10:00:41.818032465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:41.824866 env[1448]: time="2024-02-09T10:00:41.824812455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:41.831466 env[1448]: time="2024-02-09T10:00:41.831422238Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:41.837882 env[1448]: time="2024-02-09T10:00:41.837837131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:00:41.838555 env[1448]: time="2024-02-09T10:00:41.838516643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9\"" Feb 9 10:00:41.840887 env[1448]: time="2024-02-09T10:00:41.840846589Z" level=info msg="CreateContainer within sandbox \"eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 10:00:41.875839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1253070583.mount: Deactivated successfully. 
Feb 9 10:00:41.886847 env[1448]: time="2024-02-09T10:00:41.886802693Z" level=info msg="CreateContainer within sandbox \"eca9c3b499b1cc2d2ba88a82e8138e092ee5484305d41f28da3ccc9fff53736b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"aafa52ec73b08f9dc59dc73cccf5dabd5ddcc86eb38b64a54a20f4bbb5bf6295\"" Feb 9 10:00:41.887696 env[1448]: time="2024-02-09T10:00:41.887670093Z" level=info msg="StartContainer for \"aafa52ec73b08f9dc59dc73cccf5dabd5ddcc86eb38b64a54a20f4bbb5bf6295\"" Feb 9 10:00:41.981874 env[1448]: time="2024-02-09T10:00:41.981837404Z" level=info msg="StartContainer for \"aafa52ec73b08f9dc59dc73cccf5dabd5ddcc86eb38b64a54a20f4bbb5bf6295\" returns successfully" Feb 9 10:00:43.032000 audit[5875]: NETFILTER_CFG table=filter:147 family=2 entries=8 op=nft_register_rule pid=5875 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:43.032000 audit[5875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffc91cf900 a2=0 a3=ffff96cc56c0 items=0 ppid=2831 pid=5875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:43.089928 kernel: audit: type=1325 audit(1707472843.032:336): table=filter:147 family=2 entries=8 op=nft_register_rule pid=5875 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:43.090063 kernel: audit: type=1300 audit(1707472843.032:336): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffc91cf900 a2=0 a3=ffff96cc56c0 items=0 ppid=2831 pid=5875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:43.032000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:00:43.109375 kernel: audit: type=1327 audit(1707472843.032:336): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:00:43.038000 audit[5875]: NETFILTER_CFG table=nat:148 family=2 entries=78 op=nft_register_rule pid=5875 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:43.038000 audit[5875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffc91cf900 a2=0 a3=ffff96cc56c0 items=0 ppid=2831 pid=5875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:43.038000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:00:43.127373 kernel: audit: type=1325 audit(1707472843.038:337): table=nat:148 family=2 entries=78 op=nft_register_rule pid=5875 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:00:54.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.12:22-10.200.12.6:33808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:00:54.357905 systemd[1]: Started sshd@7-10.200.20.12:22-10.200.12.6:33808.service. 
Feb 9 10:00:54.363550 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 9 10:00:54.363677 kernel: audit: type=1130 audit(1707472854.357:338): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.12:22-10.200.12.6:33808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:00:54.826000 audit[5903]: USER_ACCT pid=5903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:00:54.826961 sshd[5903]: Accepted publickey for core from 10.200.12.6 port 33808 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:00:54.828800 sshd[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:00:54.834329 systemd[1]: Started session-10.scope. Feb 9 10:00:54.835827 systemd-logind[1429]: New session 10 of user core. Feb 9 10:00:54.828000 audit[5903]: CRED_ACQ pid=5903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:00:54.878740 kernel: audit: type=1101 audit(1707472854.826:339): pid=5903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:00:54.878903 kernel: audit: type=1103 audit(1707472854.828:340): pid=5903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:00:54.878938 kernel: audit: type=1006 audit(1707472854.828:341): pid=5903 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Feb 9 10:00:54.828000 audit[5903]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffde608390 a2=3 a3=1 items=0 ppid=1 pid=5903 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:54.921193 kernel: audit: type=1300 audit(1707472854.828:341): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffde608390 a2=3 a3=1 items=0 ppid=1 pid=5903 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:00:54.921786 kernel: audit: type=1327 audit(1707472854.828:341): proctitle=737368643A20636F7265205B707269765D Feb 9 10:00:54.828000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:00:54.930399 kernel: audit: type=1105 audit(1707472854.842:342): pid=5903 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:00:54.842000 audit[5903]: USER_START pid=5903 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:00:54.847000 audit[5906]: CRED_ACQ pid=5906 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:00:54.983137 kernel: audit: type=1103 audit(1707472854.847:343): pid=5906 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:00:55.289492 sshd[5903]: pam_unix(sshd:session): session closed for user core Feb 9 10:00:55.290000 audit[5903]: USER_END pid=5903 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:00:55.293588 systemd-logind[1429]: Session 10 logged out. Waiting for processes to exit. Feb 9 10:00:55.294963 systemd[1]: sshd@7-10.200.20.12:22-10.200.12.6:33808.service: Deactivated successfully. Feb 9 10:00:55.295866 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 10:00:55.297363 systemd-logind[1429]: Removed session 10. Feb 9 10:00:55.290000 audit[5903]: CRED_DISP pid=5903 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:00:55.346659 kernel: audit: type=1106 audit(1707472855.290:344): pid=5903 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:00:55.346803 kernel: audit: type=1104 audit(1707472855.290:345): pid=5903 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:00:55.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.12:22-10.200.12.6:33808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:00.365537 systemd[1]: Started sshd@8-10.200.20.12:22-10.200.12.6:44956.service. Feb 9 10:01:00.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.12:22-10.200.12.6:44956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:00.373585 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 10:01:00.373693 kernel: audit: type=1130 audit(1707472860.364:347): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.12:22-10.200.12.6:44956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:01:00.789000 audit[5918]: USER_ACCT pid=5918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:00.791465 sshd[5918]: Accepted publickey for core from 10.200.12.6 port 44956 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:00.818000 audit[5918]: CRED_ACQ pid=5918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:00.820510 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:00.825483 systemd[1]: Started session-11.scope. Feb 9 10:01:00.826351 systemd-logind[1429]: New session 11 of user core. Feb 9 10:01:00.846169 kernel: audit: type=1101 audit(1707472860.789:348): pid=5918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:00.846454 kernel: audit: type=1103 audit(1707472860.818:349): pid=5918 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:00.846525 kernel: audit: type=1006 audit(1707472860.818:350): pid=5918 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 9 10:01:00.818000 audit[5918]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe2743040 a2=3 a3=1 items=0 ppid=1 pid=5918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:00.893067 kernel: audit: type=1300 audit(1707472860.818:350): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe2743040 a2=3 a3=1 items=0 ppid=1 pid=5918 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:00.818000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:00.903224 kernel: audit: type=1327 audit(1707472860.818:350): proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:00.830000 audit[5918]: USER_START pid=5918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:00.936269 kernel: audit: type=1105 audit(1707472860.830:351): pid=5918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:00.936508 kernel: audit: type=1103 audit(1707472860.837:352): pid=5922 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:00.837000 audit[5922]: CRED_ACQ pid=5922 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:01.286702 sshd[5918]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:01.286000 audit[5918]: USER_END pid=5918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:01.289919 systemd[1]: sshd@8-10.200.20.12:22-10.200.12.6:44956.service: Deactivated successfully. Feb 9 10:01:01.290794 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 10:01:01.286000 audit[5918]: CRED_DISP pid=5918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:01.319638 systemd-logind[1429]: Session 11 logged out. Waiting for processes to exit. Feb 9 10:01:01.320578 systemd-logind[1429]: Removed session 11. Feb 9 10:01:01.344492 kernel: audit: type=1106 audit(1707472861.286:353): pid=5918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:01.344633 kernel: audit: type=1104 audit(1707472861.286:354): pid=5918 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:01.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.12:22-10.200.12.6:44956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:06.033232 systemd[1]: run-containerd-runc-k8s.io-91d63ce072a914b20a5508b3e18aa2e28df7768cd29968d1054fb74323b1398c-runc.M07ASH.mount: Deactivated successfully. Feb 9 10:01:06.363264 systemd[1]: Started sshd@9-10.200.20.12:22-10.200.12.6:44966.service. Feb 9 10:01:06.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.12:22-10.200.12.6:44966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:06.370371 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 10:01:06.370492 kernel: audit: type=1130 audit(1707472866.362:356): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.12:22-10.200.12.6:44966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:01:06.825000 audit[5957]: USER_ACCT pid=5957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:06.827263 sshd[5957]: Accepted publickey for core from 10.200.12.6 port 44966 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:06.863349 kernel: audit: type=1101 audit(1707472866.825:357): pid=5957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:06.863451 kernel: audit: type=1103 audit(1707472866.861:358): pid=5957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:06.861000 audit[5957]: CRED_ACQ pid=5957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:06.863297 sshd[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:06.913323 kernel: audit: type=1006 audit(1707472866.861:359): pid=5957 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Feb 9 10:01:06.861000 audit[5957]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdee23180 a2=3 a3=1 items=0 ppid=1 pid=5957 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:06.917402 systemd[1]: Started session-12.scope. Feb 9 10:01:06.918747 systemd-logind[1429]: New session 12 of user core. 
Feb 9 10:01:06.947100 kernel: audit: type=1300 audit(1707472866.861:359): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdee23180 a2=3 a3=1 items=0 ppid=1 pid=5957 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:06.947481 kernel: audit: type=1327 audit(1707472866.861:359): proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:06.861000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:06.923000 audit[5957]: USER_START pid=5957 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:06.993828 kernel: audit: type=1105 audit(1707472866.923:360): pid=5957 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:06.925000 audit[5960]: CRED_ACQ pid=5960 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:07.022907 kernel: audit: type=1103 audit(1707472866.925:361): pid=5960 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:07.308844 sshd[5957]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:07.308000 audit[5957]: USER_END pid=5957 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:07.312311 systemd-logind[1429]: Session 12 logged out. Waiting for processes to exit. Feb 9 10:01:07.314027 systemd[1]: sshd@9-10.200.20.12:22-10.200.12.6:44966.service: Deactivated successfully. Feb 9 10:01:07.314933 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 10:01:07.316921 systemd-logind[1429]: Removed session 12. 
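Each audit record above carries the same instant twice: the syslog prefix (for example "Feb 9 10:01:06.861000") and the audit(seconds.millis:serial) stamp inside the message, where the serial number only orders records within the current boot. The prefixes in this log line up with UTC. A minimal sketch, assuming nothing beyond the Python standard library (the helper name audit_stamp is made up here), that turns the epoch stamp back into wall-clock form, using the audit(1707472866.861:359) record from the session-12 login above:

    # Sketch: parse an audit(secs.millis:serial) stamp and render it as UTC
    # wall-clock time, matching the syslog prefix on the same record.
    import re
    from datetime import datetime, timezone

    def audit_stamp(record: str):
        m = re.search(r"audit\((\d+)\.(\d+):(\d+)\)", record)
        if m is None:
            return None
        secs, millis, serial = (int(g) for g in m.groups())
        ts = datetime.fromtimestamp(secs, tz=timezone.utc).replace(microsecond=millis * 1000)
        return ts, serial

    ts, serial = audit_stamp("audit(1707472866.861:359)")
    print(ts.isoformat(), serial)   # 2024-02-09T10:01:06.861000+00:00 359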
Feb 9 10:01:07.309000 audit[5957]: CRED_DISP pid=5957 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:07.368437 kernel: audit: type=1106 audit(1707472867.308:362): pid=5957 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:07.368587 kernel: audit: type=1104 audit(1707472867.309:363): pid=5957 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:07.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.12:22-10.200.12.6:44966 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:12.166217 systemd[1]: run-containerd-runc-k8s.io-6af57b696b2eb8090e936cd3ee70390dff905cf992966c205ae64ad76299a3c7-runc.iZwR8I.mount: Deactivated successfully. Feb 9 10:01:12.226964 kubelet[2631]: I0209 10:01:12.226411 2631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f6d76d589-f54rv" podStartSLOduration=-9.223371975628412e+09 pod.CreationTimestamp="2024-02-09 10:00:11 +0000 UTC" firstStartedPulling="2024-02-09 10:00:12.578010942 +0000 UTC m=+117.442195268" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:00:42.998264427 +0000 UTC m=+147.862448753" watchObservedRunningTime="2024-02-09 10:01:12.22636271 +0000 UTC m=+177.090547036" Feb 9 10:01:12.305000 audit[6036]: NETFILTER_CFG table=filter:149 family=2 entries=7 op=nft_register_rule pid=6036 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:12.313647 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 10:01:12.313810 kernel: audit: type=1325 audit(1707472872.305:365): table=filter:149 family=2 entries=7 op=nft_register_rule pid=6036 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:12.305000 audit[6036]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffca5345f0 a2=0 a3=ffff88fd76c0 items=0 ppid=2831 pid=6036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:12.368642 kernel: audit: type=1300 audit(1707472872.305:365): arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffca5345f0 a2=0 a3=ffff88fd76c0 items=0 ppid=2831 pid=6036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:12.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:12.385624 kernel: audit: type=1327 audit(1707472872.305:365): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:12.305000 audit[6036]: NETFILTER_CFG table=nat:150 family=2 
entries=85 op=nft_register_chain pid=6036 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:12.389905 systemd[1]: Started sshd@10-10.200.20.12:22-10.200.12.6:36702.service. Feb 9 10:01:12.405631 kernel: audit: type=1325 audit(1707472872.305:366): table=nat:150 family=2 entries=85 op=nft_register_chain pid=6036 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:12.305000 audit[6036]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28484 a0=3 a1=ffffca5345f0 a2=0 a3=ffff88fd76c0 items=0 ppid=2831 pid=6036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:12.440703 kernel: audit: type=1300 audit(1707472872.305:366): arch=c00000b7 syscall=211 success=yes exit=28484 a0=3 a1=ffffca5345f0 a2=0 a3=ffff88fd76c0 items=0 ppid=2831 pid=6036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:12.442438 kernel: audit: type=1327 audit(1707472872.305:366): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:12.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:12.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.12:22-10.200.12.6:36702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:12.485091 kernel: audit: type=1130 audit(1707472872.384:367): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.12:22-10.200.12.6:36702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:01:12.503000 audit[6063]: NETFILTER_CFG table=filter:151 family=2 entries=6 op=nft_register_rule pid=6063 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:12.503000 audit[6063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffe1edfef0 a2=0 a3=ffffa32ea6c0 items=0 ppid=2831 pid=6063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:12.559078 kernel: audit: type=1325 audit(1707472872.503:368): table=filter:151 family=2 entries=6 op=nft_register_rule pid=6063 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:12.559214 kernel: audit: type=1300 audit(1707472872.503:368): arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffe1edfef0 a2=0 a3=ffffa32ea6c0 items=0 ppid=2831 pid=6063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:12.503000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:12.576155 kernel: audit: type=1327 audit(1707472872.503:368): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:12.523000 audit[6063]: NETFILTER_CFG table=nat:152 family=2 entries=92 op=nft_register_chain pid=6063 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:12.523000 audit[6063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30372 a0=3 a1=ffffe1edfef0 a2=0 a3=ffffa32ea6c0 items=0 ppid=2831 pid=6063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:12.523000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:12.922205 sshd[6038]: Accepted publickey for core from 10.200.12.6 port 36702 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:12.920000 audit[6038]: USER_ACCT pid=6038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:12.924153 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:12.922000 audit[6038]: CRED_ACQ pid=6038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:12.922000 audit[6038]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcbffbcc0 a2=3 a3=1 items=0 ppid=1 pid=6038 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:12.922000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:12.928733 systemd-logind[1429]: New session 13 of user core. Feb 9 10:01:12.929213 systemd[1]: Started session-13.scope. 
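The type=1327 (PROCTITLE) records above store the command line of the audited process as the raw bytes of /proc/<pid>/cmdline, hex-encoded, so NUL bytes separate the original argv entries; ausearch -i renders the same field in clear text. A minimal sketch in plain Python (decode_proctitle is a made-up helper, no audit tooling assumed) decoding the two proctitle values that recur in this log:

    # Sketch: decode an audit PROCTITLE hex value back into argv.
    def decode_proctitle(hexval: str) -> list[str]:
        return bytes.fromhex(hexval).decode("utf-8", errors="replace").split("\x00")

    # The iptables-restore invocations recorded in the NETFILTER_CFG/SYSCALL events:
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"))
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

    # sshd overwrites its title, so no NUL separators remain:
    print(decode_proctitle("737368643A20636F7265205B707269765D"))
    # ['sshd: core [priv]']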
Feb 9 10:01:12.933000 audit[6038]: USER_START pid=6038 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:12.934000 audit[6066]: CRED_ACQ pid=6066 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:13.161890 systemd[1]: run-containerd-runc-k8s.io-aafa52ec73b08f9dc59dc73cccf5dabd5ddcc86eb38b64a54a20f4bbb5bf6295-runc.K7CWlD.mount: Deactivated successfully. Feb 9 10:01:13.349813 sshd[6038]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:13.349000 audit[6038]: USER_END pid=6038 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:13.349000 audit[6038]: CRED_DISP pid=6038 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:13.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.12:22-10.200.12.6:36702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:13.352554 systemd-logind[1429]: Session 13 logged out. Waiting for processes to exit. Feb 9 10:01:13.352787 systemd[1]: sshd@10-10.200.20.12:22-10.200.12.6:36702.service: Deactivated successfully. Feb 9 10:01:13.353753 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 10:01:13.354236 systemd-logind[1429]: Removed session 13. Feb 9 10:01:13.424975 systemd[1]: Started sshd@11-10.200.20.12:22-10.200.12.6:36706.service. Feb 9 10:01:13.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.12:22-10.200.12.6:36706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:01:13.845000 audit[6076]: USER_ACCT pid=6076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:13.846786 sshd[6076]: Accepted publickey for core from 10.200.12.6 port 36706 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:13.846000 audit[6076]: CRED_ACQ pid=6076 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:13.846000 audit[6076]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc0f76ba0 a2=3 a3=1 items=0 ppid=1 pid=6076 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:13.846000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:13.848206 sshd[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:13.852202 systemd-logind[1429]: New session 14 of user core. Feb 9 10:01:13.852720 systemd[1]: Started session-14.scope. Feb 9 10:01:13.856000 audit[6076]: USER_START pid=6076 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:13.857000 audit[6079]: CRED_ACQ pid=6079 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:15.154503 sshd[6076]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:15.155000 audit[6076]: USER_END pid=6076 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:15.155000 audit[6076]: CRED_DISP pid=6076 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:15.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.12:22-10.200.12.6:36706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:15.157162 systemd[1]: sshd@11-10.200.20.12:22-10.200.12.6:36706.service: Deactivated successfully. Feb 9 10:01:15.158334 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 10:01:15.158363 systemd-logind[1429]: Session 14 logged out. Waiting for processes to exit. Feb 9 10:01:15.159699 systemd-logind[1429]: Removed session 14. Feb 9 10:01:15.222485 systemd[1]: Started sshd@12-10.200.20.12:22-10.200.12.6:36720.service. 
Feb 9 10:01:15.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.12:22-10.200.12.6:36720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:15.643654 sshd[6093]: Accepted publickey for core from 10.200.12.6 port 36720 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:15.643000 audit[6093]: USER_ACCT pid=6093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:15.644000 audit[6093]: CRED_ACQ pid=6093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:15.644000 audit[6093]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc30a71b0 a2=3 a3=1 items=0 ppid=1 pid=6093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:15.644000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:15.645441 sshd[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:15.650069 systemd[1]: Started session-15.scope. Feb 9 10:01:15.650615 systemd-logind[1429]: New session 15 of user core. Feb 9 10:01:15.657000 audit[6093]: USER_START pid=6093 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:15.659000 audit[6098]: CRED_ACQ pid=6098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:16.053974 sshd[6093]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:16.054000 audit[6093]: USER_END pid=6093 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:16.055000 audit[6093]: CRED_DISP pid=6093 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:16.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.12:22-10.200.12.6:36720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:16.059082 systemd[1]: sshd@12-10.200.20.12:22-10.200.12.6:36720.service: Deactivated successfully. Feb 9 10:01:16.060802 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 10:01:16.061441 systemd-logind[1429]: Session 15 logged out. Waiting for processes to exit. Feb 9 10:01:16.062239 systemd-logind[1429]: Removed session 15. 
Feb 9 10:01:20.614286 systemd[1]: run-containerd-runc-k8s.io-f5160b77947cadfadae2b25c5878a61febea174b54ea0da60bd689352dee80a8-runc.YZEuQT.mount: Deactivated successfully. Feb 9 10:01:20.650173 systemd[1]: run-containerd-runc-k8s.io-f5160b77947cadfadae2b25c5878a61febea174b54ea0da60bd689352dee80a8-runc.tWwaR3.mount: Deactivated successfully. Feb 9 10:01:21.130164 systemd[1]: Started sshd@13-10.200.20.12:22-10.200.12.6:47392.service. Feb 9 10:01:21.164192 kernel: kauditd_printk_skb: 35 callbacks suppressed Feb 9 10:01:21.164331 kernel: audit: type=1130 audit(1707472881.130:396): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.12:22-10.200.12.6:47392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:21.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.12:22-10.200.12.6:47392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:21.584000 audit[6150]: USER_ACCT pid=6150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:21.584699 sshd[6150]: Accepted publickey for core from 10.200.12.6 port 47392 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:21.615078 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:21.613000 audit[6150]: CRED_ACQ pid=6150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:21.645955 kernel: audit: type=1101 audit(1707472881.584:397): pid=6150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:21.646090 kernel: audit: type=1103 audit(1707472881.613:398): pid=6150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:21.662869 kernel: audit: type=1006 audit(1707472881.614:399): pid=6150 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Feb 9 10:01:21.614000 audit[6150]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffceb7b680 a2=3 a3=1 items=0 ppid=1 pid=6150 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:21.692197 kernel: audit: type=1300 audit(1707472881.614:399): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffceb7b680 a2=3 a3=1 items=0 ppid=1 pid=6150 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:21.614000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:21.695170 systemd[1]: Started session-16.scope. 
Feb 9 10:01:21.696376 systemd-logind[1429]: New session 16 of user core. Feb 9 10:01:21.709566 kernel: audit: type=1327 audit(1707472881.614:399): proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:21.709772 kernel: audit: type=1105 audit(1707472881.705:400): pid=6150 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:21.705000 audit[6150]: USER_START pid=6150 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:21.708000 audit[6153]: CRED_ACQ pid=6153 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:21.766472 kernel: audit: type=1103 audit(1707472881.708:401): pid=6153 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:22.057866 sshd[6150]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:22.058000 audit[6150]: USER_END pid=6150 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:22.061295 systemd-logind[1429]: Session 16 logged out. Waiting for processes to exit. Feb 9 10:01:22.062680 systemd[1]: sshd@13-10.200.20.12:22-10.200.12.6:47392.service: Deactivated successfully. Feb 9 10:01:22.063547 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 10:01:22.065587 systemd-logind[1429]: Removed session 16. Feb 9 10:01:22.058000 audit[6150]: CRED_DISP pid=6150 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:22.119936 kernel: audit: type=1106 audit(1707472882.058:402): pid=6150 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:22.120097 kernel: audit: type=1104 audit(1707472882.058:403): pid=6150 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:22.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.12:22-10.200.12.6:47392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:01:27.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.12:22-10.200.12.6:50142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:27.129759 systemd[1]: Started sshd@14-10.200.20.12:22-10.200.12.6:50142.service. Feb 9 10:01:27.135775 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 10:01:27.135858 kernel: audit: type=1130 audit(1707472887.128:405): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.12:22-10.200.12.6:50142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:27.549000 audit[6167]: USER_ACCT pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:27.551074 sshd[6167]: Accepted publickey for core from 10.200.12.6 port 50142 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:27.553439 sshd[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:27.583907 systemd[1]: Started session-17.scope. Feb 9 10:01:27.584979 systemd-logind[1429]: New session 17 of user core. Feb 9 10:01:27.551000 audit[6167]: CRED_ACQ pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:27.611820 kernel: audit: type=1101 audit(1707472887.549:406): pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:27.614814 kernel: audit: type=1103 audit(1707472887.551:407): pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:27.614912 kernel: audit: type=1006 audit(1707472887.551:408): pid=6167 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 9 10:01:27.551000 audit[6167]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe72d6730 a2=3 a3=1 items=0 ppid=1 pid=6167 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:27.658154 kernel: audit: type=1300 audit(1707472887.551:408): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe72d6730 a2=3 a3=1 items=0 ppid=1 pid=6167 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:27.551000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:27.672497 kernel: audit: type=1327 audit(1707472887.551:408): proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:27.612000 audit[6167]: USER_START pid=6167 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:27.704745 kernel: audit: type=1105 audit(1707472887.612:409): pid=6167 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:27.614000 audit[6171]: CRED_ACQ pid=6171 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:27.730764 kernel: audit: type=1103 audit(1707472887.614:410): pid=6171 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:27.918444 sshd[6167]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:27.918000 audit[6167]: USER_END pid=6167 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:27.921771 systemd-logind[1429]: Session 17 logged out. Waiting for processes to exit. Feb 9 10:01:27.927693 systemd[1]: sshd@14-10.200.20.12:22-10.200.12.6:50142.service: Deactivated successfully. Feb 9 10:01:27.928610 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 10:01:27.930315 systemd-logind[1429]: Removed session 17. Feb 9 10:01:27.918000 audit[6167]: CRED_DISP pid=6167 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:27.978253 kernel: audit: type=1106 audit(1707472887.918:411): pid=6167 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:27.978391 kernel: audit: type=1104 audit(1707472887.918:412): pid=6167 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:27.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.12:22-10.200.12.6:50142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:32.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.12:22-10.200.12.6:50158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:32.994292 systemd[1]: Started sshd@15-10.200.20.12:22-10.200.12.6:50158.service. 
Feb 9 10:01:33.000619 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 10:01:33.000699 kernel: audit: type=1130 audit(1707472892.993:414): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.12:22-10.200.12.6:50158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:33.448000 audit[6196]: USER_ACCT pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:33.449687 sshd[6196]: Accepted publickey for core from 10.200.12.6 port 50158 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:33.451465 sshd[6196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:33.457017 systemd[1]: Started session-18.scope. Feb 9 10:01:33.460957 systemd-logind[1429]: New session 18 of user core. Feb 9 10:01:33.449000 audit[6196]: CRED_ACQ pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:33.511408 kernel: audit: type=1101 audit(1707472893.448:415): pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:33.512421 kernel: audit: type=1103 audit(1707472893.449:416): pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:33.513279 kernel: audit: type=1006 audit(1707472893.449:417): pid=6196 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Feb 9 10:01:33.449000 audit[6196]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe02b6d00 a2=3 a3=1 items=0 ppid=1 pid=6196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:33.559485 kernel: audit: type=1300 audit(1707472893.449:417): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe02b6d00 a2=3 a3=1 items=0 ppid=1 pid=6196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:33.559603 kernel: audit: type=1327 audit(1707472893.449:417): proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:33.449000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:33.480000 audit[6196]: USER_START pid=6196 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:33.604053 kernel: audit: type=1105 audit(1707472893.480:418): pid=6196 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:33.481000 audit[6202]: CRED_ACQ pid=6202 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:33.633634 kernel: audit: type=1103 audit(1707472893.481:419): pid=6202 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:33.849540 sshd[6196]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:33.849000 audit[6196]: USER_END pid=6196 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:33.853031 systemd[1]: sshd@15-10.200.20.12:22-10.200.12.6:50158.service: Deactivated successfully. Feb 9 10:01:33.853923 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 10:01:33.850000 audit[6196]: CRED_DISP pid=6196 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:33.883936 systemd-logind[1429]: Session 18 logged out. Waiting for processes to exit. Feb 9 10:01:33.884844 systemd-logind[1429]: Removed session 18. Feb 9 10:01:33.908763 kernel: audit: type=1106 audit(1707472893.849:420): pid=6196 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:33.908864 kernel: audit: type=1104 audit(1707472893.850:421): pid=6196 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:33.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.12:22-10.200.12.6:50158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:38.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.12:22-10.200.12.6:39644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:38.919944 systemd[1]: Started sshd@16-10.200.20.12:22-10.200.12.6:39644.service. Feb 9 10:01:38.927734 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 10:01:38.927962 kernel: audit: type=1130 audit(1707472898.918:423): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.12:22-10.200.12.6:39644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:01:39.381000 audit[6233]: USER_ACCT pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:39.383100 sshd[6233]: Accepted publickey for core from 10.200.12.6 port 39644 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:39.411330 kernel: audit: type=1101 audit(1707472899.381:424): pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:39.410000 audit[6233]: CRED_ACQ pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:39.412092 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:39.417473 systemd[1]: Started session-19.scope. Feb 9 10:01:39.418357 systemd-logind[1429]: New session 19 of user core. Feb 9 10:01:39.465340 kernel: audit: type=1103 audit(1707472899.410:425): pid=6233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:39.465449 kernel: audit: type=1006 audit(1707472899.410:426): pid=6233 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Feb 9 10:01:39.410000 audit[6233]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc0aab8b0 a2=3 a3=1 items=0 ppid=1 pid=6233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:39.496725 kernel: audit: type=1300 audit(1707472899.410:426): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc0aab8b0 a2=3 a3=1 items=0 ppid=1 pid=6233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:39.410000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:39.506684 kernel: audit: type=1327 audit(1707472899.410:426): proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:39.506781 kernel: audit: type=1105 audit(1707472899.439:427): pid=6233 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:39.439000 audit[6233]: USER_START pid=6233 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:39.441000 audit[6236]: CRED_ACQ pid=6236 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:39.566843 kernel: audit: type=1103 audit(1707472899.441:428): pid=6236 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:39.802351 sshd[6233]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:39.801000 audit[6233]: USER_END pid=6233 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:39.805188 systemd-logind[1429]: Session 19 logged out. Waiting for processes to exit. Feb 9 10:01:39.806441 systemd[1]: sshd@16-10.200.20.12:22-10.200.12.6:39644.service: Deactivated successfully. Feb 9 10:01:39.807267 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 10:01:39.808678 systemd-logind[1429]: Removed session 19. Feb 9 10:01:39.802000 audit[6233]: CRED_DISP pid=6233 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:39.861287 kernel: audit: type=1106 audit(1707472899.801:429): pid=6233 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:39.861407 kernel: audit: type=1104 audit(1707472899.802:430): pid=6233 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:39.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.12:22-10.200.12.6:39644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:39.871203 systemd[1]: Started sshd@17-10.200.20.12:22-10.200.12.6:39658.service. Feb 9 10:01:39.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.12:22-10.200.12.6:39658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:01:40.289000 audit[6248]: USER_ACCT pid=6248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:40.290663 sshd[6248]: Accepted publickey for core from 10.200.12.6 port 39658 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:40.290000 audit[6248]: CRED_ACQ pid=6248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:40.290000 audit[6248]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff90f1df0 a2=3 a3=1 items=0 ppid=1 pid=6248 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:40.290000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:40.292280 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:40.296394 systemd-logind[1429]: New session 20 of user core. Feb 9 10:01:40.296870 systemd[1]: Started session-20.scope. Feb 9 10:01:40.300000 audit[6248]: USER_START pid=6248 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:40.301000 audit[6251]: CRED_ACQ pid=6251 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:40.783513 sshd[6248]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:40.783000 audit[6248]: USER_END pid=6248 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:40.783000 audit[6248]: CRED_DISP pid=6248 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:40.786081 systemd-logind[1429]: Session 20 logged out. Waiting for processes to exit. Feb 9 10:01:40.786407 systemd[1]: sshd@17-10.200.20.12:22-10.200.12.6:39658.service: Deactivated successfully. Feb 9 10:01:40.787270 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 10:01:40.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.12:22-10.200.12.6:39658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:40.788482 systemd-logind[1429]: Removed session 20. Feb 9 10:01:40.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.12:22-10.200.12.6:39672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:01:40.856946 systemd[1]: Started sshd@18-10.200.20.12:22-10.200.12.6:39672.service. Feb 9 10:01:41.311000 audit[6259]: USER_ACCT pid=6259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:41.312798 sshd[6259]: Accepted publickey for core from 10.200.12.6 port 39672 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:41.312000 audit[6259]: CRED_ACQ pid=6259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:41.312000 audit[6259]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3a1f350 a2=3 a3=1 items=0 ppid=1 pid=6259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:41.312000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:41.314457 sshd[6259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:41.318526 systemd-logind[1429]: New session 21 of user core. Feb 9 10:01:41.318908 systemd[1]: Started session-21.scope. Feb 9 10:01:41.322000 audit[6259]: USER_START pid=6259 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:41.323000 audit[6262]: CRED_ACQ pid=6262 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:42.214566 systemd[1]: run-containerd-runc-k8s.io-aafa52ec73b08f9dc59dc73cccf5dabd5ddcc86eb38b64a54a20f4bbb5bf6295-runc.pEYOWq.mount: Deactivated successfully. 
Feb 9 10:01:43.611000 audit[6340]: NETFILTER_CFG table=filter:153 family=2 entries=18 op=nft_register_rule pid=6340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:43.611000 audit[6340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffe8ccbba0 a2=0 a3=ffff9d7476c0 items=0 ppid=2831 pid=6340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:43.611000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:43.612000 audit[6340]: NETFILTER_CFG table=nat:154 family=2 entries=94 op=nft_register_rule pid=6340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:43.612000 audit[6340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30372 a0=3 a1=ffffe8ccbba0 a2=0 a3=ffff9d7476c0 items=0 ppid=2831 pid=6340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:43.612000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:43.632949 sshd[6259]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:43.632000 audit[6259]: USER_END pid=6259 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:43.632000 audit[6259]: CRED_DISP pid=6259 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:43.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.12:22-10.200.12.6:39672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:43.635649 systemd[1]: sshd@18-10.200.20.12:22-10.200.12.6:39672.service: Deactivated successfully. Feb 9 10:01:43.637242 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 10:01:43.637780 systemd-logind[1429]: Session 21 logged out. Waiting for processes to exit. Feb 9 10:01:43.638689 systemd-logind[1429]: Removed session 21. 
Feb 9 10:01:43.667000 audit[6368]: NETFILTER_CFG table=filter:155 family=2 entries=30 op=nft_register_rule pid=6368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:43.667000 audit[6368]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffd7b29530 a2=0 a3=ffff8e4136c0 items=0 ppid=2831 pid=6368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:43.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:43.669000 audit[6368]: NETFILTER_CFG table=nat:156 family=2 entries=94 op=nft_register_rule pid=6368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:43.669000 audit[6368]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30372 a0=3 a1=ffffd7b29530 a2=0 a3=ffff8e4136c0 items=0 ppid=2831 pid=6368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:43.669000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:43.707276 systemd[1]: Started sshd@19-10.200.20.12:22-10.200.12.6:39686.service. Feb 9 10:01:43.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.12:22-10.200.12.6:39686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:44.160000 audit[6369]: USER_ACCT pid=6369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:44.161989 sshd[6369]: Accepted publickey for core from 10.200.12.6 port 39686 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:44.163867 sshd[6369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:44.167808 kernel: kauditd_printk_skb: 36 callbacks suppressed Feb 9 10:01:44.167911 kernel: audit: type=1101 audit(1707472904.160:455): pid=6369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:44.162000 audit[6369]: CRED_ACQ pid=6369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:44.223412 kernel: audit: type=1103 audit(1707472904.162:456): pid=6369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:44.243376 kernel: audit: type=1006 audit(1707472904.162:457): pid=6369 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Feb 9 10:01:44.162000 audit[6369]: SYSCALL arch=c00000b7 syscall=64 
success=yes exit=3 a0=5 a1=fffff434a620 a2=3 a3=1 items=0 ppid=1 pid=6369 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:44.274127 kernel: audit: type=1300 audit(1707472904.162:457): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff434a620 a2=3 a3=1 items=0 ppid=1 pid=6369 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:44.162000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:44.285376 kernel: audit: type=1327 audit(1707472904.162:457): proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:44.287790 systemd[1]: Started session-22.scope. Feb 9 10:01:44.288004 systemd-logind[1429]: New session 22 of user core. Feb 9 10:01:44.291000 audit[6369]: USER_START pid=6369 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:44.293000 audit[6372]: CRED_ACQ pid=6372 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:44.352104 kernel: audit: type=1105 audit(1707472904.291:458): pid=6369 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:44.352213 kernel: audit: type=1103 audit(1707472904.293:459): pid=6372 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:44.774511 sshd[6369]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:44.774000 audit[6369]: USER_END pid=6369 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:44.777215 systemd-logind[1429]: Session 22 logged out. Waiting for processes to exit. Feb 9 10:01:44.778914 systemd[1]: sshd@19-10.200.20.12:22-10.200.12.6:39686.service: Deactivated successfully. Feb 9 10:01:44.779776 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 10:01:44.781205 systemd-logind[1429]: Removed session 22. 
Feb 9 10:01:44.774000 audit[6369]: CRED_DISP pid=6369 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:44.835735 kernel: audit: type=1106 audit(1707472904.774:460): pid=6369 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:44.835878 kernel: audit: type=1104 audit(1707472904.774:461): pid=6369 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:44.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.12:22-10.200.12.6:39686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:44.843226 systemd[1]: Started sshd@20-10.200.20.12:22-10.200.12.6:39694.service. Feb 9 10:01:44.861779 kernel: audit: type=1131 audit(1707472904.777:462): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.12:22-10.200.12.6:39686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:44.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.12:22-10.200.12.6:39694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:45.261000 audit[6380]: USER_ACCT pid=6380 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:45.262786 sshd[6380]: Accepted publickey for core from 10.200.12.6 port 39694 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:45.262000 audit[6380]: CRED_ACQ pid=6380 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:45.262000 audit[6380]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdab9aec0 a2=3 a3=1 items=0 ppid=1 pid=6380 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:45.262000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:45.264438 sshd[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:45.268883 systemd[1]: Started session-23.scope. Feb 9 10:01:45.269075 systemd-logind[1429]: New session 23 of user core. 
Feb 9 10:01:45.271000 audit[6380]: USER_START pid=6380 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:45.273000 audit[6383]: CRED_ACQ pid=6383 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:45.623499 sshd[6380]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:45.623000 audit[6380]: USER_END pid=6380 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:45.623000 audit[6380]: CRED_DISP pid=6380 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:45.627210 systemd[1]: sshd@20-10.200.20.12:22-10.200.12.6:39694.service: Deactivated successfully. Feb 9 10:01:45.628108 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 10:01:45.628609 systemd-logind[1429]: Session 23 logged out. Waiting for processes to exit. Feb 9 10:01:45.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.12:22-10.200.12.6:39694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:45.629350 systemd-logind[1429]: Removed session 23. 
Feb 9 10:01:49.755000 audit[6420]: NETFILTER_CFG table=filter:157 family=2 entries=18 op=nft_register_rule pid=6420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:49.761635 kernel: kauditd_printk_skb: 11 callbacks suppressed Feb 9 10:01:49.761762 kernel: audit: type=1325 audit(1707472909.755:472): table=filter:157 family=2 entries=18 op=nft_register_rule pid=6420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:49.755000 audit[6420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffffb1e96b0 a2=0 a3=ffffa9e166c0 items=0 ppid=2831 pid=6420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:49.811745 kernel: audit: type=1300 audit(1707472909.755:472): arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffffb1e96b0 a2=0 a3=ffffa9e166c0 items=0 ppid=2831 pid=6420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:49.755000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:49.828254 kernel: audit: type=1327 audit(1707472909.755:472): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:49.761000 audit[6420]: NETFILTER_CFG table=nat:158 family=2 entries=178 op=nft_register_chain pid=6420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:49.846181 kernel: audit: type=1325 audit(1707472909.761:473): table=nat:158 family=2 entries=178 op=nft_register_chain pid=6420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 10:01:49.846352 kernel: audit: type=1300 audit(1707472909.761:473): arch=c00000b7 syscall=211 success=yes exit=72324 a0=3 a1=fffffb1e96b0 a2=0 a3=ffffa9e166c0 items=0 ppid=2831 pid=6420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:49.761000 audit[6420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=72324 a0=3 a1=fffffb1e96b0 a2=0 a3=ffffa9e166c0 items=0 ppid=2831 pid=6420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:49.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:49.897274 kernel: audit: type=1327 audit(1707472909.761:473): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 10:01:50.621566 systemd[1]: run-containerd-runc-k8s.io-f5160b77947cadfadae2b25c5878a61febea174b54ea0da60bd689352dee80a8-runc.DqkdaQ.mount: Deactivated successfully. Feb 9 10:01:50.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.12:22-10.200.12.6:42394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:50.698674 systemd[1]: Started sshd@21-10.200.20.12:22-10.200.12.6:42394.service. 
Feb 9 10:01:50.738434 kernel: audit: type=1130 audit(1707472910.698:474): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.12:22-10.200.12.6:42394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:51.173198 sshd[6441]: Accepted publickey for core from 10.200.12.6 port 42394 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:51.172000 audit[6441]: USER_ACCT pid=6441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:51.174698 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:51.204337 kernel: audit: type=1101 audit(1707472911.172:475): pid=6441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:51.204448 kernel: audit: type=1103 audit(1707472911.174:476): pid=6441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:51.174000 audit[6441]: CRED_ACQ pid=6441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:51.252603 kernel: audit: type=1006 audit(1707472911.174:477): pid=6441 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Feb 9 10:01:51.174000 audit[6441]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc3008770 a2=3 a3=1 items=0 ppid=1 pid=6441 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:51.174000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:51.255379 systemd-logind[1429]: New session 24 of user core. Feb 9 10:01:51.256242 systemd[1]: Started session-24.scope. 
Feb 9 10:01:51.262000 audit[6441]: USER_START pid=6441 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:51.264000 audit[6444]: CRED_ACQ pid=6444 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:51.578621 sshd[6441]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:51.579000 audit[6441]: USER_END pid=6441 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:51.579000 audit[6441]: CRED_DISP pid=6441 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:51.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.12:22-10.200.12.6:42394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:51.581513 systemd[1]: sshd@21-10.200.20.12:22-10.200.12.6:42394.service: Deactivated successfully. Feb 9 10:01:51.582789 systemd-logind[1429]: Session 24 logged out. Waiting for processes to exit. Feb 9 10:01:51.582857 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 10:01:51.583731 systemd-logind[1429]: Removed session 24. Feb 9 10:01:56.681119 kernel: kauditd_printk_skb: 7 callbacks suppressed Feb 9 10:01:56.681254 kernel: audit: type=1130 audit(1707472916.647:483): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.12:22-10.200.12.6:42400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:56.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.12:22-10.200.12.6:42400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:56.647148 systemd[1]: Started sshd@22-10.200.20.12:22-10.200.12.6:42400.service. 
Feb 9 10:01:57.060000 audit[6453]: USER_ACCT pid=6453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:57.060940 sshd[6453]: Accepted publickey for core from 10.200.12.6 port 42400 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:01:57.062600 sshd[6453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:57.061000 audit[6453]: CRED_ACQ pid=6453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:57.122284 kernel: audit: type=1101 audit(1707472917.060:484): pid=6453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:57.122437 kernel: audit: type=1103 audit(1707472917.061:485): pid=6453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:57.142174 kernel: audit: type=1006 audit(1707472917.062:486): pid=6453 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Feb 9 10:01:57.062000 audit[6453]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe198c290 a2=3 a3=1 items=0 ppid=1 pid=6453 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:57.174207 kernel: audit: type=1300 audit(1707472917.062:486): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe198c290 a2=3 a3=1 items=0 ppid=1 pid=6453 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:01:57.062000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:57.186874 kernel: audit: type=1327 audit(1707472917.062:486): proctitle=737368643A20636F7265205B707269765D Feb 9 10:01:57.187369 systemd-logind[1429]: New session 25 of user core. Feb 9 10:01:57.188033 systemd[1]: Started session-25.scope. 
Feb 9 10:01:57.192000 audit[6453]: USER_START pid=6453 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:57.226000 audit[6456]: CRED_ACQ pid=6456 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:57.255469 kernel: audit: type=1105 audit(1707472917.192:487): pid=6453 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:57.255618 kernel: audit: type=1103 audit(1707472917.226:488): pid=6456 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:57.521534 sshd[6453]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:57.522000 audit[6453]: USER_END pid=6453 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:57.525066 systemd[1]: sshd@22-10.200.20.12:22-10.200.12.6:42400.service: Deactivated successfully. Feb 9 10:01:57.525989 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 10:01:57.523000 audit[6453]: CRED_DISP pid=6453 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:57.591210 kernel: audit: type=1106 audit(1707472917.522:489): pid=6453 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:57.591825 kernel: audit: type=1104 audit(1707472917.523:490): pid=6453 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:01:57.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.12:22-10.200.12.6:42400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:01:57.591446 systemd-logind[1429]: Session 25 logged out. Waiting for processes to exit. Feb 9 10:01:57.592508 systemd-logind[1429]: Removed session 25. Feb 9 10:02:02.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.12:22-10.200.12.6:56652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:02:02.594030 systemd[1]: Started sshd@23-10.200.20.12:22-10.200.12.6:56652.service. Feb 9 10:02:02.603838 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 10:02:02.603948 kernel: audit: type=1130 audit(1707472922.592:492): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.12:22-10.200.12.6:56652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:02:03.005000 audit[6468]: USER_ACCT pid=6468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:03.007476 sshd[6468]: Accepted publickey for core from 10.200.12.6 port 56652 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:02:03.039871 kernel: audit: type=1101 audit(1707472923.005:493): pid=6468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:03.039971 kernel: audit: type=1103 audit(1707472923.036:494): pid=6468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:03.036000 audit[6468]: CRED_ACQ pid=6468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:03.039725 sshd[6468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:03.084292 kernel: audit: type=1006 audit(1707472923.036:495): pid=6468 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Feb 9 10:02:03.036000 audit[6468]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe4ed3380 a2=3 a3=1 items=0 ppid=1 pid=6468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:02:03.114913 kernel: audit: type=1300 audit(1707472923.036:495): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe4ed3380 a2=3 a3=1 items=0 ppid=1 pid=6468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:02:03.036000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:02:03.125686 kernel: audit: type=1327 audit(1707472923.036:495): proctitle=737368643A20636F7265205B707269765D Feb 9 10:02:03.129168 systemd[1]: Started session-26.scope. Feb 9 10:02:03.129568 systemd-logind[1429]: New session 26 of user core. 
Feb 9 10:02:03.132000 audit[6468]: USER_START pid=6468 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:03.166000 audit[6471]: CRED_ACQ pid=6471 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:03.195808 kernel: audit: type=1105 audit(1707472923.132:496): pid=6468 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:03.195928 kernel: audit: type=1103 audit(1707472923.166:497): pid=6471 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:03.461510 sshd[6468]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:03.461000 audit[6468]: USER_END pid=6468 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:03.465023 systemd[1]: sshd@23-10.200.20.12:22-10.200.12.6:56652.service: Deactivated successfully. Feb 9 10:02:03.466026 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 10:02:03.462000 audit[6468]: CRED_DISP pid=6468 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:03.524749 kernel: audit: type=1106 audit(1707472923.461:498): pid=6468 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:03.524912 kernel: audit: type=1104 audit(1707472923.462:499): pid=6468 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:03.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.12:22-10.200.12.6:56652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:02:03.525594 systemd-logind[1429]: Session 26 logged out. Waiting for processes to exit. Feb 9 10:02:03.526514 systemd-logind[1429]: Removed session 26. Feb 9 10:02:06.042141 systemd[1]: run-containerd-runc-k8s.io-91d63ce072a914b20a5508b3e18aa2e28df7768cd29968d1054fb74323b1398c-runc.KE4HbS.mount: Deactivated successfully. 
Feb 9 10:02:08.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.12:22-10.200.12.6:32946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:02:08.529788 systemd[1]: Started sshd@24-10.200.20.12:22-10.200.12.6:32946.service. Feb 9 10:02:08.536006 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 10:02:08.536105 kernel: audit: type=1130 audit(1707472928.528:501): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.12:22-10.200.12.6:32946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:02:08.944000 audit[6504]: USER_ACCT pid=6504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:08.946581 sshd[6504]: Accepted publickey for core from 10.200.12.6 port 32946 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:02:08.976408 kernel: audit: type=1101 audit(1707472928.944:502): pid=6504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:08.975000 audit[6504]: CRED_ACQ pid=6504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:08.977560 sshd[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:09.023752 kernel: audit: type=1103 audit(1707472928.975:503): pid=6504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:09.023896 kernel: audit: type=1006 audit(1707472928.975:504): pid=6504 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Feb 9 10:02:08.975000 audit[6504]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe4646f90 a2=3 a3=1 items=0 ppid=1 pid=6504 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:02:09.053963 kernel: audit: type=1300 audit(1707472928.975:504): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe4646f90 a2=3 a3=1 items=0 ppid=1 pid=6504 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:02:08.975000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:02:09.069828 kernel: audit: type=1327 audit(1707472928.975:504): proctitle=737368643A20636F7265205B707269765D Feb 9 10:02:09.073846 systemd-logind[1429]: New session 27 of user core. Feb 9 10:02:09.074553 systemd[1]: Started session-27.scope. 
Feb 9 10:02:09.079000 audit[6504]: USER_START pid=6504 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:09.112000 audit[6507]: CRED_ACQ pid=6507 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:09.141443 kernel: audit: type=1105 audit(1707472929.079:505): pid=6504 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:09.141572 kernel: audit: type=1103 audit(1707472929.112:506): pid=6507 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:09.409917 sshd[6504]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:09.409000 audit[6504]: USER_END pid=6504 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:09.446836 systemd[1]: sshd@24-10.200.20.12:22-10.200.12.6:32946.service: Deactivated successfully. Feb 9 10:02:09.447688 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 10:02:09.410000 audit[6504]: CRED_DISP pid=6504 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:09.449437 kernel: audit: type=1106 audit(1707472929.409:507): pid=6504 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:09.449540 systemd-logind[1429]: Session 27 logged out. Waiting for processes to exit. Feb 9 10:02:09.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.12:22-10.200.12.6:32946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:02:09.476389 kernel: audit: type=1104 audit(1707472929.410:508): pid=6504 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:09.477012 systemd-logind[1429]: Removed session 27. Feb 9 10:02:12.203208 systemd[1]: run-containerd-runc-k8s.io-aafa52ec73b08f9dc59dc73cccf5dabd5ddcc86eb38b64a54a20f4bbb5bf6295-runc.qQHQOW.mount: Deactivated successfully. Feb 9 10:02:14.477344 systemd[1]: Started sshd@25-10.200.20.12:22-10.200.12.6:32960.service. 
Feb 9 10:02:14.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.12:22-10.200.12.6:32960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:02:14.488641 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 10:02:14.511345 kernel: audit: type=1130 audit(1707472934.476:510): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.12:22-10.200.12.6:32960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:02:14.895000 audit[6558]: USER_ACCT pid=6558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:14.896736 sshd[6558]: Accepted publickey for core from 10.200.12.6 port 32960 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:02:14.925375 kernel: audit: type=1101 audit(1707472934.895:511): pid=6558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:14.924000 audit[6558]: CRED_ACQ pid=6558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:14.926656 sshd[6558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:14.971696 kernel: audit: type=1103 audit(1707472934.924:512): pid=6558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:14.971802 kernel: audit: type=1006 audit(1707472934.925:513): pid=6558 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Feb 9 10:02:14.925000 audit[6558]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffde4f2230 a2=3 a3=1 items=0 ppid=1 pid=6558 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:02:15.001656 kernel: audit: type=1300 audit(1707472934.925:513): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffde4f2230 a2=3 a3=1 items=0 ppid=1 pid=6558 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:02:14.925000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:02:15.011956 kernel: audit: type=1327 audit(1707472934.925:513): proctitle=737368643A20636F7265205B707269765D Feb 9 10:02:15.014056 systemd-logind[1429]: New session 28 of user core. Feb 9 10:02:15.014502 systemd[1]: Started session-28.scope. 
Feb 9 10:02:15.018000 audit[6558]: USER_START pid=6558 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:15.019000 audit[6561]: CRED_ACQ pid=6561 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:15.079152 kernel: audit: type=1105 audit(1707472935.018:514): pid=6558 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:15.079280 kernel: audit: type=1103 audit(1707472935.019:515): pid=6561 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:15.324114 sshd[6558]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:15.323000 audit[6558]: USER_END pid=6558 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:15.328119 systemd[1]: sshd@25-10.200.20.12:22-10.200.12.6:32960.service: Deactivated successfully. Feb 9 10:02:15.329037 systemd[1]: session-28.scope: Deactivated successfully. Feb 9 10:02:15.325000 audit[6558]: CRED_DISP pid=6558 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:15.382778 kernel: audit: type=1106 audit(1707472935.323:516): pid=6558 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:15.382903 kernel: audit: type=1104 audit(1707472935.325:517): pid=6558 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:15.382862 systemd-logind[1429]: Session 28 logged out. Waiting for processes to exit. Feb 9 10:02:15.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.12:22-10.200.12.6:32960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:02:15.384016 systemd-logind[1429]: Removed session 28. Feb 9 10:02:20.405397 systemd[1]: Started sshd@26-10.200.20.12:22-10.200.12.6:59884.service. Feb 9 10:02:20.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.12:22-10.200.12.6:59884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 10:02:20.412111 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 10:02:20.412187 kernel: audit: type=1130 audit(1707472940.404:519): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.12:22-10.200.12.6:59884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:02:20.851000 audit[6573]: USER_ACCT pid=6573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:20.854574 sshd[6573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:20.855849 sshd[6573]: Accepted publickey for core from 10.200.12.6 port 59884 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 10:02:20.852000 audit[6573]: CRED_ACQ pid=6573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:20.910513 kernel: audit: type=1101 audit(1707472940.851:520): pid=6573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:20.910657 kernel: audit: type=1103 audit(1707472940.852:521): pid=6573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:20.927072 kernel: audit: type=1006 audit(1707472940.852:522): pid=6573 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Feb 9 10:02:20.852000 audit[6573]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe790db40 a2=3 a3=1 items=0 ppid=1 pid=6573 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:02:20.930817 systemd[1]: Started session-29.scope. Feb 9 10:02:20.931750 systemd-logind[1429]: New session 29 of user core. 
Feb 9 10:02:20.955999 kernel: audit: type=1300 audit(1707472940.852:522): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe790db40 a2=3 a3=1 items=0 ppid=1 pid=6573 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 10:02:20.852000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 10:02:20.966091 kernel: audit: type=1327 audit(1707472940.852:522): proctitle=737368643A20636F7265205B707269765D Feb 9 10:02:20.966182 kernel: audit: type=1105 audit(1707472940.956:523): pid=6573 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:20.956000 audit[6573]: USER_START pid=6573 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:20.957000 audit[6617]: CRED_ACQ pid=6617 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:21.021069 kernel: audit: type=1103 audit(1707472940.957:524): pid=6617 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:21.267940 sshd[6573]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:21.268000 audit[6573]: USER_END pid=6573 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:21.271239 systemd-logind[1429]: Session 29 logged out. Waiting for processes to exit. Feb 9 10:02:21.272649 systemd[1]: sshd@26-10.200.20.12:22-10.200.12.6:59884.service: Deactivated successfully. Feb 9 10:02:21.273533 systemd[1]: session-29.scope: Deactivated successfully. Feb 9 10:02:21.275195 systemd-logind[1429]: Removed session 29. 
Feb 9 10:02:21.268000 audit[6573]: CRED_DISP pid=6573 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:21.327149 kernel: audit: type=1106 audit(1707472941.268:525): pid=6573 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:21.327280 kernel: audit: type=1104 audit(1707472941.268:526): pid=6573 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 10:02:21.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.12:22-10.200.12.6:59884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'